Supported Clients

Here are configuration snippets for pointing various S3 clients, in different languages, at fakes3.

Ruby

AWS-S3

AWS::S3::Base.establish_connection!(
       :access_key_id => "123",
       :secret_access_key => "abc",
       :server => "localhost",
       :port => "10001" )
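
Once the connection is established, the usual aws-s3 class methods can be exercised against fakes3. A minimal sketch, using placeholder bucket and key names:

# 'my_bucket' and 'hello.txt' are placeholder names
AWS::S3::Bucket.create('my_bucket')
AWS::S3::S3Object.store('hello.txt', 'hello world', 'my_bucket')
puts AWS::S3::S3Object.value('hello.txt', 'my_bucket')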

Right AWS

RightAws::S3Interface.new('1E3GDYEOGFJPIT7', 'hgTHt68JY07JKUY08ftHYtERkjgtfERn57',
                          {:multi_thread => false, :server => 'localhost',
                           :port => 10453, :protocol => 'http', :no_subdomains => true })
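
The interface object can then be used directly. A minimal sketch, assuming the RightAws::S3Interface instance above is assigned to s3i and using placeholder bucket and key names:

# s3i is the RightAws::S3Interface instance created above;
# 'my_bucket' and 'hello.txt' are placeholder names
s3i.create_bucket('my_bucket')
s3i.put('my_bucket', 'hello.txt', 'hello world')
puts s3i.get_object('my_bucket', 'hello.txt')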

AWS-SDK

AWS::S3.new(
    :access_key_id => 'YOUR_ACCESS_KEY_ID',
    :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
    :s3_endpoint => 'localhost',
    :s3_port => 10001,
    :use_ssl => false)

If you've disabled SSL as part of an AWS.config call and attempt to use services that have not been redirected (such as STS), you will need to re-enable SSL for those services. Note that this configuration has not been extensively tested with non-S3 services from the AWS-SDK gem.
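
For reference, a global AWS.config call of the kind described above might look roughly like this (a sketch, assuming fakes3 is listening on localhost:10001):

# Global aws-sdk (v1) configuration; only S3 traffic is redirected to fakes3.
# :use_ssl => false applies to every service configured this way, so re-enable
# SSL per service (e.g. AWS::STS.new(:use_ssl => true)) if you also use
# services that are not pointed at fakes3.
AWS.config(
    :access_key_id => 'YOUR_ACCESS_KEY_ID',
    :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
    :s3_endpoint => 'localhost',
    :s3_port => 10001,
    :use_ssl => false)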

I would recommend using a hostname other than localhost. You will need to create DNS entries for somebucket.s3_endpoint in order to use fakes3.

As an alternative to creating DNS entries, at least with aws-sdk, you can force path-style requests with a configuration like this:

AWS::S3.new(
    :access_key_id => 'YOUR_ACCESS_KEY_ID',
    :secret_access_key => 'YOUR_SECRET_ACCESS_KEY',
    :s3_endpoint => 'localhost',
    :s3_force_path_style => true,
    :s3_port => 10001,
    :use_ssl => false)
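
With path-style requests forced, no per-bucket DNS entries are needed. A quick sanity check, assuming the client above is assigned to s3 and using placeholder bucket and key names:

# s3 is the AWS::S3 client created above;
# 'my_bucket' and 'hello.txt' are placeholder names
s3.buckets.create('my_bucket')
s3.buckets['my_bucket'].objects['hello.txt'].write('hello world')
puts s3.buckets['my_bucket'].objects['hello.txt'].read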

Fog

connection = Fog::Storage::AWS.new(
  aws_access_key_id:     "123",
  aws_secret_access_key: "asdf",
  host:                  "localhost",
  port:                  10001,
  scheme:                "http"
)

I also needed the following monkeypatch to make it work.

  require 'fog/aws/models/storage/files'

  # fog always expects the Last-Modified and ETag headers to be present.
  # We relax this requirement to support fakes3.
  class Fog::Storage::AWS::Files
    def normalise_headers(headers)
      headers['Last-Modified'] = Time.parse(headers['Last-Modified']) if headers['Last-Modified']
      headers['ETag'].gsub!('"','') if headers['ETag']
    end
  end
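
With the connection and the patch in place, a quick smoke test might look like this (bucket and file names are placeholders):

# 'my_bucket' and 'hello.txt' are placeholder names
directory = connection.directories.create(key: 'my_bucket')
directory.files.create(key: 'hello.txt', body: 'hello world')
puts directory.files.get('hello.txt').body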

Android

AWS SDK

Clone the S3_Uploader sample project.

Modify S3UploaderActivity.java

s3Client.setEndpoint("http://your-server-ip");

Change ACCESS_KEY_ID and SECRET_KEY in Constants.java

Java

AWS SDK

import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.S3ClientOptions;

BasicAWSCredentials credentials = new BasicAWSCredentials("foo", "bar");
AmazonS3Client s3Client = new AmazonS3Client(credentials);
s3Client.setEndpoint("http://localhost:4567");
s3Client.setS3ClientOptions(new S3ClientOptions().withPathStyleAccess(true));

If you do not set path style access (and use the default virtual-host style), you will have to set up your DNS or hosts file to contain subdomain buckets. On Unix, edit /etc/hosts and add:

127.0.0.1 bucketname.localhost

Command Line Tools

s3cmd

For s3cmd you need to set up your DNS to contain subdomain buckets, since it doesn't make path-style S3 requests. You can use a config like this to make it work: Gist
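
The linked Gist is the authoritative example; as a rough sketch of the shape such a config takes (the host, port and credentials below are assumptions, and the bucket subdomains still need matching DNS or /etc/hosts entries):

[default]
# host, port and credentials below are placeholders for your fakes3 setup
access_key = 123
secret_key = abc
host_base = localhost:10001
host_bucket = %(bucket)s.localhost:10001
use_https = False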

Then just run s3cmd -c myconfig mb s3://my_bucket
