Supported Clients
Here are configuration snippets for various S3 clients in different languages.
AWS-S3
AWS::S3::Base.establish_connection!(
:access_key_id => "123",
:secret_access_key => "abc",
:server => "localhost",
:port => "10001" )
Right AWS
RightAws::S3Interface.new('1E3GDYEOGFJPIT7', 'hgTHt68JY07JKUY08ftHYtERkjgtfERn57',
  {:multi_thread => false, :server => 'localhost',
   :port => 10453, :protocol => 'http', :no_subdomains => true })
AWS-SDK
AWS::S3.new(
:access_key_id => 'YOUR_ACCESS_KEY_ID',
:secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
:s3_endpoint => 'localhost',
:s3_port => 10001,
:use_ssl => false)
If you've disabled SSL as part of an AWS.config call and attempt to use services that have not been redirected (such as STS), you will need to re-enable SSL for those services. Note that this configuration has not been extensively tested with non-S3 services from the AWS-SDK gem.
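For instance, a sketch of this pattern with the v1 aws-sdk gem (the STS usage here is an illustrative assumption, not part of the FakeS3 docs):

```ruby
require 'aws-sdk'  # the v1 gem

# Point S3 at fakes3 over plain HTTP
AWS.config(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :s3_endpoint       => 'localhost',
  :s3_port           => 10001,
  :use_ssl           => false)

# Re-enable SSL just for services that still talk to real AWS, e.g. STS
sts = AWS::STS.new(:use_ssl => true)
```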
I would recommend using a hostname other than localhost. You will need to create DNS entries for somebucket.s3_endpoint in order to use fakes3.
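In lieu of real DNS, the same effect can be had on a development box with /etc/hosts entries, one per bucket (the hostname fakes3.local is only an example):

```
127.0.0.1 fakes3.local
127.0.0.1 somebucket.fakes3.local
```

Then pass :s3_endpoint => 'fakes3.local' when constructing the client.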
As an alternative to creating DNS entries, at least with aws-sdk, you can use a configuration like so:
AWS::S3.new(
:access_key_id => 'YOUR_ACCESS_KEY_ID',
:secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
:s3_endpoint => 'localhost',
:s3_force_path_style => true,
:s3_port => 10001,
:use_ssl => false)
AWS-SDK V2
Aws::S3::Client.new(
:access_key_id => 'YOUR_ACCESS_KEY_ID',
:secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
:region => 'YOUR_REGION',
:endpoint => 'http://localhost:10001/',
:force_path_style => true)
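A quick way to check the V2 configuration against a running fakes3 is to round-trip a small object; a minimal sketch, where the bucket and key names are placeholders:

```ruby
require 'aws-sdk'  # the v2 gem

s3 = Aws::S3::Client.new(
  :access_key_id     => 'YOUR_ACCESS_KEY_ID',
  :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
  :region            => 'us-east-1',
  :endpoint          => 'http://localhost:10001/',
  :force_path_style  => true)

# Write an object and read it back through fakes3
s3.create_bucket(:bucket => 'my-bucket')
s3.put_object(:bucket => 'my-bucket', :key => 'hello.txt', :body => 'hello world')
puts s3.get_object(:bucket => 'my-bucket', :key => 'hello.txt').body.read
```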
Fog
connection = Fog::Storage::AWS.new(aws_access_key_id: "123", aws_secret_access_key: "asdf", port: 10001, host: 'localhost', scheme: 'http')
I also needed the following monkeypatch to make it work.
require 'fog/aws/models/storage/files'
# fog always expects the Last-Modified and ETag headers to be present
# We relax this requirement to support fakes3
class Fog::Storage::AWS::Files
def normalise_headers(headers)
headers['Last-Modified'] = Time.parse(headers['Last-Modified']) if headers['Last-Modified']
headers['ETag'].gsub!('"','') if headers['ETag']
end
end
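With that patch applied, the connection from above can be exercised through fog's standard model API; a minimal sketch, where the bucket name is a placeholder:

```ruby
directory = connection.directories.create(key: 'my-bucket')
directory.files.create(key: 'hello.txt', body: 'hello world')
puts directory.files.get('hello.txt').body
```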
AWS SDK (Android)
Clone the S3_Uploader sample project, then modify S3UploaderActivity.java:
s3Client.setEndpoint("http://your-server-ip");
Finally, change ACCESS_KEY_ID and SECRET_KEY in Constants.java.
AWS SDK (Java)
BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");
AmazonS3Client s3Client = new AmazonS3Client(credentials);
s3Client.setEndpoint("http://localhost:4567");
s3Client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));
If you do not set path style access (and use the default virtual-host style), you will have to set up your DNS or hosts file to contain subdomain buckets. On Unix, edit /etc/hosts and add:
127.0.0.1 bucketname.localhost
s3cmd
For s3cmd you need to set up your DNS to contain subdomain buckets, since it doesn't make path-style S3 requests. You can use a config like this Gist to make it work.
Then just run
s3cmd -c myconfig mb s3://my_bucket
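If the Gist isn't handy, the relevant part of such a config looks roughly like this (the hostname fakes3.local and port 10001 are assumptions; %(bucket)s is s3cmd's own placeholder for the bucket subdomain):

```
[default]
access_key = 123
secret_key = abc
host_base = fakes3.local:10001
host_bucket = %(bucket)s.fakes3.local:10001
use_https = False
```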
Knox
var knox = require('knox');
knox.createClient({
key: '123',
secret: 'abc',
bucket: 'my_bucket',
endpoint: 'localhost',
style: 'path',
port: 10001
});
aws-sdk (Node.js)
$ npm install --save aws-sdk
var fs = require('fs')
var AWS = require('aws-sdk')
var config = {
s3ForcePathStyle: true,
accessKeyId: 'ACCESS_KEY_ID',
secretAccessKey: 'SECRET_ACCESS_KEY',
endpoint: new AWS.Endpoint('http://localhost:10001')
}
var client = new AWS.S3(config)
var params = {
Key: 'Key',
Bucket: 'Bucket',
Body: fs.createReadStream('./image.png')
}
client.upload(params, function uploadCallback (err, data) {
console.log(err, data)
})