This article is about how operations are performed with the AWS JavaScript SDK to access AWS S3 or similar services.
While S3 was launched by Amazon a long time ago, nowadays almost all cloud providers offer a similar service, so existing code can be reused against those services with minimal configuration changes.
The configuration change (the endpoint) is presented here: S3 Linode. This article shows how to use version 3 of the aws-sdk to access those services.
Dependency
The only dependency needed is @aws-sdk/client-s3 (installed, for example, with npm install @aws-sdk/client-s3).
Imports for our example
import {
  DeleteObjectCommand,
  DeleteObjectCommandInput,
  GetObjectCommand,
  GetObjectCommandInput,
  PutObjectCommand,
  S3Client
} from '@aws-sdk/client-s3'
import fs from 'fs'
Client instance
An instance of the client is needed for any access; here is how to create one:
const getClient = (): S3Client => {
  // Set your AWS credentials from environment variables
  const accessKeyId = process.env['key']!;
  const secretAccessKey = process.env['secret']!;
  const endpoint: string = 'https://us-east-1.linodeobjects.com';
  const region: string = 'us-east-1';
  // Configure the S3 client
  const credentials = { secretAccessKey, accessKeyId };
  return new S3Client({ region, endpoint, credentials });
}
Note that the credentials are taken from environment variables. If none are provided, the implementation looks in the usual places where AWS configuration is found. Note the endpoint: this is what makes the difference between going to Amazon or to an alternate cloud provider; in our case it is Linode.
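For comparison, here is a minimal sketch, not part of the original example, of a client that targets AWS itself: leaving out the endpoint and the explicit credentials makes the SDK fall back to its default resolution (environment variables such as AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, or the shared ~/.aws configuration files).

// A minimal sketch, not part of the original example: with no endpoint and no
// explicit credentials the SDK resolves them through its default provider
// chain (environment variables, shared ~/.aws config, etc.) and talks to AWS.
const getAwsClient = (): S3Client => {
  return new S3Client({ region: 'us-east-1' });
}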
Uploading object
Don’t pay attention to the fs dependency: the example shows how to upload a file, but any stream of data can be processed the same way.
const putObject = async () => {
  const s3Client = getClient();
  const bucketName = 'bucket-storage';
  const filePath = 'c:\\temp\\alfa.dat'; // Change this to the path of your file
  const fileStream = fs.createReadStream(filePath);
  // Specify the parameters for the upload
  const uploadParams = {
    Bucket: bucketName,
    Key: 'alfa.dat', // Set the key (file name) under which you want to store the file in S3
    Body: fileStream,
  };
  const command = new PutObjectCommand(uploadParams);
  const response = await s3Client.send(command);
  console.log(response);
}
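To illustrate that the Body does not have to come from a file, here is a minimal sketch (the key name is hypothetical) that uploads an in-memory string with the same command:

// A minimal sketch, assuming the getClient() helper above and a hypothetical
// key: the Body can just as well be a string or a Buffer instead of a stream.
const putText = async (): Promise<void> => {
  const s3Client = getClient();
  const command = new PutObjectCommand({
    Bucket: 'bucket-storage',
    Key: 'notes.txt',              // hypothetical key name
    Body: 'hello from aws-sdk v3', // in-memory data instead of a file stream
  });
  const response = await s3Client.send(command);
  console.log(response.ETag);
}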
Delete object
const deleteObject = async () => {
  const s3Client = getClient();
  const bucketName = 'bucket-storage';
  // Specify the parameters for the delete
  const deleteObjectParam: DeleteObjectCommandInput = {
    Bucket: bucketName,
    Key: 'alfa.dat'
  };
  const command = new DeleteObjectCommand(deleteObjectParam);
  const response = await s3Client.send(command);
  console.log(response);
}
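Note that send() rejects when the call fails (wrong bucket, bad credentials, and so on), so in practice it is usually wrapped in a try/catch. A minimal sketch, reusing the same illustrative bucket and key:

// A minimal sketch of error handling around the delete call; the bucket and
// key are the same illustrative values used above.
const deleteObjectSafely = async (): Promise<boolean> => {
  const s3Client = getClient();
  try {
    await s3Client.send(
      new DeleteObjectCommand({ Bucket: 'bucket-storage', Key: 'alfa.dat' })
    );
    return true;
  } catch (err) {
    console.error('Delete failed:', err);
    return false;
  }
}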
Get object
Note that the object comes in chunks, so it has to be processed asynchronously; this eliminates the need to fetch all the data, put it in temporary storage, and then take it from there.
const getObject = async () => {
  const s3Client = getClient();
  const bucketName = 'bucket-storage';
  const getObjectParam: GetObjectCommandInput = {
    Bucket: bucketName,
    Key: 'alfa.dat'
  };
  const command: GetObjectCommand = new GetObjectCommand(getObjectParam);
  const response = await s3Client.send(command);
  const body = response.Body!;
  const fileStream = fs.createWriteStream('c:\\temp\\info.dat');
  await processBody(body, (chunk) => { fileStream.write(chunk); });
  fileStream.close();
}

const processBody = async (body: any, callback: (bytes: Uint8Array) => void): Promise<void> => {
  return new Promise((resolve, reject) => {
    body.on('data', (chunk: Uint8Array) => {
      callback(chunk);
    });
    body.once('end', () => {
      resolve();
    });
    body.once('error', (err: any) => {
      reject(err);
    });
  });
}
In this case, the processBody method listens to the events emitted by response.Body and assists in processing the chunks asynchronously.
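For small objects there is also a shortcut: recent versions of @aws-sdk/client-s3 expose convenience helpers directly on the response body, so the manual streaming can be skipped. A minimal sketch, assuming such a version (the whole object is buffered in memory, so this is only suitable for small payloads):

// A minimal sketch, assuming a recent @aws-sdk/client-s3 version where the
// response body exposes transformToString(); the whole object is read into memory.
const getObjectAsString = async (): Promise<string> => {
  const s3Client = getClient();
  const response = await s3Client.send(
    new GetObjectCommand({ Bucket: 'bucket-storage', Key: 'alfa.dat' })
  );
  return await response.Body!.transformToString();
}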