hadoop-ec2 launch-slaves my-cluster 5
to launch 5 additional slaves. It will take a few minutes for the slaves to join the cluster, but you don't need to do anything else.
Stopping nodes is more complicated.
Path path = new Path("filename.of.sequence.file");
org.apache.hadoop.fs.RawLocalFileSystem fs = new org.apache.hadoop.fs.RawLocalFileSystem();
fs.setConf(conf);
SequenceFile.Writer writer = new SequenceFile.Writer(fs, conf, path, Text.class, BytesWritable.class);
for (String value : yourData) {  // yourData: placeholder for your own collection of records
    // BytesWritable wraps a byte[], so convert the String first
    writer.append(new Text("key"), new BytesWritable(value.getBytes()));
}
writer.close();
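Reading the pairs back is symmetrical. A hedged sketch using SequenceFile.Reader, reusing the fs, conf, and path objects from the writer above:

```java
// Sketch: iterate over the key/value pairs written above.
// Assumes fs, conf and path are the same objects used by the writer.
SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
Text key = new Text();
BytesWritable value = new BytesWritable();
while (reader.next(key, value)) {  // returns false once the file is exhausted
    System.out.println(key + " => " + value.getLength() + " bytes");
}
reader.close();
```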
hadoop-ec2 push my-cluster filename
hadoop-ec2 login my-cluster
hadoop jar /usr/local/hadoop-0.19.0/hadoop-0.19.0-examples.jar pi 4 10000
s3n://AWS_ID:AWS_SECRET_KEY@bucket/filename
hadoop fs -ls s3n://AWS_ID:AWS_SECRET_KEY@bucket
hadoop distcp s3n://AWS_ID:AWS_SECRET_KEY@bucket/ newDir
conf/hadoop-site.xml
<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>YOUR_ACCESS_KEY_ID</value>
</property>
<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>THE_KEY_ITSELF</value>
</property>
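The same two properties can also be set programmatically instead of in conf/hadoop-site.xml. A minimal sketch, assuming you construct the Configuration yourself (the bucket name here is a placeholder):

```java
// Set S3 credentials in code rather than in hadoop-site.xml
Configuration conf = new Configuration();
conf.set("fs.s3n.awsAccessKeyId", "YOUR_ACCESS_KEY_ID");
conf.set("fs.s3n.awsSecretAccessKey", "THE_KEY_ITSELF");

// Any FileSystem obtained with this conf can then resolve s3n:// paths
FileSystem s3fs = FileSystem.get(URI.create("s3n://bucket/"), conf);
```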
ssh -D 2000 destination -N
Add the src/contrib/ec2/bin subdirectory of hadoop to your path.
src/contrib/ec2/bin/hadoop-ec2-env.sh
hadoop-ec2 launch-cluster my-cluster 2
hadoop-ec2 launch-slaves my-cluster n
hadoop-ec2 terminate-cluster my-cluster