Trying to upload files with Multer to NodeJS app running on Kubernetes Pods (3 replicas) - javascript

I have a NodeJS app running on a Kubernetes cluster with 3 pod replicas. I use Multer to upload files to a specific folder, /public/upload, which contains further subfolders; when I upload a profile photo, for example, it is saved to /public/upload/profiles. I want photos uploaded on one pod to show up on all running pods, so I use a PersistentVolume and its respective PersistentVolumeClaim mounted at /public/upload. However, the uploaded photos only show up on the pod that handled the upload, not on the others. I include below the full code for the Kubernetes Volume and VolumeClaim and for the Multer setup. I can't find any solution online; I thought that creating a PersistentVolume would take care of this automatically. Am I doing something wrong? Is there a configuration for this, or is it an error in how the volumes are configured?
Thank you very much for your help!
Volumes Deployment file:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: testapp-volume
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  storageClassName: manual
  persistentVolumeReclaimPolicy: Delete
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/app-folder/uploads"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testapp-volumeclaim
  namespace: testapp-namespace
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: manual
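(Editor's note for context: a hostPath volume lives on a single node's filesystem, and ReadWriteOnce allows the volume to be mounted by only one node at a time, so replicas scheduled on different nodes each see different storage. Sharing uploads across nodes needs a network-backed volume with the ReadWriteMany access mode. A minimal sketch, assuming an NFS export is available; the server address and export path are placeholders:)

apiVersion: v1
kind: PersistentVolume
metadata:
  name: testapp-volume-rwx
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany          # mountable read-write by many nodes at once
  nfs:
    server: 10.0.0.10        # placeholder NFS server address
    path: /exports/uploads   # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: testapp-volumeclaim-rwx
  namespace: testapp-namespace
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi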
Pods and Service Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testapp-deployment
  namespace: testapp-namespace
  labels:
    app: testapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: testapp-deployment
  template:
    metadata:
      labels:
        app: testapp-deployment
    spec:
      containers:
        - name: testapp-container
          image: /* Repository */
          ports:
            - containerPort: 2000
          imagePullPolicy: Always
          volumeMounts:
            - mountPath: /public/uploads
              name: testapp-volume
      imagePullSecrets:
        - name: testapp-secret
      volumes:
        - name: testapp-volume
          persistentVolumeClaim:
            claimName: testapp-volumeclaim
---
apiVersion: v1
kind: Service
metadata:
  name: testapp-deployment
  namespace: testapp-namespace
spec:
  type: ClusterIP
  selector:
    app: testapp-deployment
  ports:
    - protocol: TCP
      port: 5000
      targetPort: 2000
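(A quick way to confirm that the three replicas were scheduled onto different nodes, and therefore onto different copies of the hostPath directory, is:

kubectl get pods -n testapp-namespace -o wide

The NODE column shows where each replica runs; with a hostPath volume, every node has its own independent /app-folder/uploads.)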
Multer code:
var storage = multer.diskStorage({ // Multer disk storage settings
  destination: function (req, file, cb) {
    cb(null, './public/uploads/' + req.body.folder);
  },
  filename: function (req, file, cb) {
    var datetimestamp = Date.now();
    var ext = path.extname(file.originalname).toLowerCase();
    console.log("FILE -> " + file.originalname + " ; RANDOM NUM -> " + generateRandomNumber(12));
    cb(null, generateRandomNumber(14) + '_' + datetimestamp + ext);
  }
});
var file_upload = multer({
  storage: storage,
  fileFilter: function (req, file, callback) {
    var ext = path.extname(file.originalname).toLowerCase();
    if (ext !== '.png' && ext !== '.jpg' && ext !== '.jpeg') {
      return callback(new Error('Only images are allowed'));
    }
    callback(null, true);
  },
  limits: {
    fileSize: 1024 * 1024 * 100 // 100 MB
  }
});
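(One aside on the Multer snippet: when destination is given as a function, diskStorage does not create a missing directory for you, so a dynamic path like './public/uploads/' + req.body.folder fails with ENOENT if the folder does not exist yet. A sketch of a guard, following the paths used above:)

var fs = require('fs');
var path = require('path');

var storage = multer.diskStorage({
  destination: function (req, file, cb) {
    // Sketch: create the target folder on demand before handing it to Multer.
    var dir = path.join('./public/uploads', req.body.folder || '');
    fs.mkdir(dir, { recursive: true }, function (err) {
      cb(err, dir); // err is null on success (or if the folder already exists)
    });
  }
  // filename callback as in the snippet above
});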

Related

UI5 Custom Middleware often cannot parse JSON bodies when accessed via Karma testing suite

Ok, this is a very specific problem I have been struggling with for over a week now. I am developing a SAP UI5 application where UI testing is done via Opa5/QUnit. Serving the application directly via npm works without problems; however, using Karma (targeting a headless approach), two problems have surfaced which seem to be caused by the custom middleware in use:
res.status() / res.header() do not work (only native Node methods like res.setHeader())
Using a body parser (no matter whether express.json() or the deprecated bodyParser.json()), the parser middleware seems to keep working forever until the browser request fails after exactly 20 or 40 seconds (Chrome only shows the "Stalled" timebar). This happens very often, but not always.
While there is a workaround for the first issue (though it would still be interesting to know why this happens only with Karma), I can't find a solution for the failing requests. I tried changing the browser Karma uses, changing from HTML to script mode, including several plugins, and also analyzed packets via Wireshark, because the browsers show no difference at all between normal and Karma execution.
Through Wireshark I found out that the Karma browser keeps closing WebSockets after requests are done while the normal browser doesn't (even when the application is served via Karma). Also, in the rare cases of working POST JSON requests, content length or processing time do not seem to have an effect.
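(For reference, the plain-Node equivalents of the Express helpers from the first point above; a sketch unrelated to the specific middleware below, and 'X-Demo' is a made-up header name:)

app.use((req, res, next) => {
  // Express style (reported as not working under Karma here):
  // res.status(204).header('X-Demo', '1').send();

  // Native http.ServerResponse equivalents:
  res.statusCode = 204;          // instead of res.status(204)
  res.setHeader('X-Demo', '1');  // instead of res.header('X-Demo', '1')
  res.end();                     // instead of res.send()
});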
karma.conf.js:
module.exports = function (config) {
  "use strict";
  config.set({
    frameworks: ['ui5'],
    reporters: ["progress"],
    browsers: ["Chrome_without_security"],
    ui5: {
      mode: "html",
      testpage: "webapp/test/integration/opaTests.qunit.html",
      configPath: "ui5-testing.yaml",
    },
    customLaunchers: {
      Chrome_without_security: {
        base: 'Chrome',
        flags: ['--disable-web-security', '--no-sandbox']
      }
    },
    singleRun: true,
    browserNoActivityTimeout: 400000,
    //logLevel: config.LOG_DEBUG,
  });
};
ui5-testing.yaml:
specVersion: '2.1'
metadata:
  name: grunt-build
type: application
framework:
  name: SAPUI5
  version: "1.84.0"
  libraries:
    - name: sap.m
    - name: sap.ui.core
    - name: sap.ui.layout
    - name: sap.ui.support
      development: true
    - name: sap.ui.table
    - name: sap.ui.unified
    #- name: sap.ui.model
    - name: sap.ushell
      development: true
    - name: themelib_sap_fiori_3
      optional: true
    - name: themelib_sap_belize
      optional: true
    #- name: themelib_sap_bluecrystal
    #  optional: true
    - name: sap.f
    - name: sap.tnt
resources:
  configuration:
    paths:
      webapp: /webapp
server:
  customMiddleware:
    - name: proxy
      beforeMiddleware: serveResources
      configuration:
        testing: true
---
specVersion: '2.1'
kind: extension
type: server-middleware
metadata:
  name: proxy
middleware:
  path: lib/middleware/proxy.js
proxy.js:
const express = require('express');
module.exports = function ({
  resources,
  middlewareUtil,
  options
}) {
  require('dotenv').config();
  const axios = require('axios');
  var admin = require('firebase-admin');
  const app = express();
  ...
  app.use(express.json());
  app.use((req, res, next) => { // Most POST requests with application/json header do not enter this!
    ...
  });
  return app;
};
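(One low-risk way to check whether the JSON parser ever receives body data is body-parser's verify hook, which express.json() forwards to body-parser; it runs with the raw buffer before parsing. A diagnostic sketch, not part of the original middleware:)

app.use(express.json({
  verify: function (req, res, buf) {
    // Runs once the raw body has been fully read, before JSON parsing;
    // if this never logs, the body is not reaching the parser at all.
    console.log('raw body received:', buf.length, 'bytes');
  }
}));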
Requesting method (example):
upsert: function (aElements, iTimeout) {
  let that = this;
  return new Promise((resolve, reject) => {
    let sBody = JSON.stringify(aElements);
    let xhr = new XMLHttpRequest();
    xhr.open('POST', UPSERT_URL, true);
    xhr.onload = function (oResponse) {
      that.proceedResponse(oResponse, this)
        .then(() => resolve())
        .catch(iStatus => reject(iStatus));
    };
    xhr.onerror = function (oError) {
      reject(oError);
    };
    xhr.ontimeout = function (oError) {
      console.error(`The request for ${UPSERT_URL} timed out.`);
      reject(oError);
    };
    xhr.timeout = iTimeout;
    xhr.setRequestHeader("Content-Type", "application/json");
    xhr.send(sBody);
  });
},
Normal call: ui5 serve -p 8080 -o /test/integration/opaTests.qunit.html --config ui5-testing.yaml
Karma call: karma start
Maybe someone is able to help me here, thank you very much!

Serve asset files in nginx using Kubernetes

I'm trying to deploy a pod to Kubernetes with my Node app and an nginx proxy server, which should also serve my asset files.
I'm using two containers inside one pod for that. The code below runs the application correctly, but the asset files are not served by nginx.
Below is my front-end-deployment.yaml file, which creates the deployment. I'm wondering why nginx, with this configuration, does not serve the static files.
apiVersion: v1
kind: ConfigMap
metadata:
  name: mc3-nginx-conf
data:
  nginx.conf: |
    user nginx;
    worker_processes 3;
    error_log /var/log/nginx/error.log;

    events {
      worker_connections 10240;
    }

    http {
      log_format main
        'remote_addr:$remote_addr\t'
        'time_local:$time_local\t'
        'method:$request_method\t'
        'uri:$request_uri\t'
        'host:$host\t'
        'status:$status\t'
        'bytes_sent:$body_bytes_sent\t'
        'referer:$http_referer\t'
        'useragent:$http_user_agent\t'
        'forwardedfor:$http_x_forwarded_for\t'
        'request_time:$request_time';
      access_log /var/log/nginx/access.log main;

      upstream webapp {
        server 127.0.0.1:3000;
      }

      server {
        listen 80;
        root /var/www/html;

        location / {
          proxy_pass http://webapp;
          proxy_redirect off;
        }
      }
    }
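(A note on how this config behaves: every request matches location / and is proxied to the Node upstream, so the root /var/www/html directive never gets a chance to serve files from disk. For nginx to serve the assets itself, a more specific location is needed. A sketch, assuming the build output is copied directly under /var/www/html; the extension list is an example:)

        # Sketch: serve common asset types straight from the shared volume;
        # regex locations take precedence over the plain "location /" prefix.
        location ~* \.(js|css|png|jpg|jpeg|gif|svg|ico)$ {
          root /var/www/html;
          try_files $uri =404;
        }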
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      component: web
  template:
    metadata:
      labels:
        component: web
    spec:
      volumes:
        - name: nginx-proxy-config
          configMap:
            name: mc3-nginx-conf
        - name: shared-data
          emptyDir: {}
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
          volumeMounts:
            - name: nginx-proxy-config
              mountPath: /etc/nginx/nginx.conf
              subPath: nginx.conf
            - name: shared-data
              mountPath: /var/www/html
        - name: frontend
          image: sepehraliakbari/rtlnl-frontend:latest
          volumeMounts:
            - name: shared-data
              mountPath: /var/www/html
          lifecycle:
            postStart:
              exec:
                command: ['/bin/sh', '-c', 'cp -r /app/build/client/. /var/www/html']
          ports:
            - containerPort: 3000

MalformedXML: The XML you provided was not well-formed or did not validate against our published schema

I am having this weird issue while working with AWS S3. I am working on an application that stores images in an AWS bucket, using Multer as middleware and the S3FS library to connect and upload to AWS.
But the following error pops up when I try uploading content:
"MalformedXML: The XML you provided was not well-formed or did not validate against our published schema"
Index.js
var express = require('express');
var router = express();
var multer = require('multer');
var fs = require('fs');
var S3FS = require('s3fs');

var upload = multer({
  dest: 'uploads'
});

var S3fsImpl = new S3FS('bucket-name', {
  region: 'us-east-1',
  accessKeyId: 'XXXXXXXXXXXX',
  secretAccessKey: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
});

/* GET home page. */
router.get('/', function (req, res, next) {
  res.render('profile', {
    title: 'Express'
  });
});

router.post('/testupload', upload.single('file'), function (req, res) {
  var file = req.file;
  console.log(file);
  var path = req.file.path;
  var stream = fs.createReadStream(path);
  console.log(stream);
  S3fsImpl.writeFile(file.name, stream).then(function () {
    fs.unlink(file.path, function (err) {
      if (err) {
        console.log(err);
      }
    });
    res.redirect('/profile');
  });
});

module.exports = router;
EDIT
Output:
{ fieldname: 'file',
originalname: '441_1.docx',
encoding: '7bit',
mimetype: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
destination: 'uploads',
filename: '662dcbe544804e4f50dfef1f52b40d22',
path: 'uploads\\662dcbe544804e4f50dfef1f52b40d22',
size: 13938 }
ReadStream {
_readableState:
ReadableState {
objectMode: false,
highWaterMark: 65536,
buffer: BufferList { head: null, tail: null, length: 0 },
length: 0,
pipes: null,
pipesCount: 0,
flowing: null,
ended: false,
endEmitted: false,
reading: false,
sync: true,
needReadable: false,
emittedReadable: false,
readableListening: false,
resumeScheduled: false,
defaultEncoding: 'utf8',
ranOut: false,
awaitDrain: 0,
readingMore: false,
decoder: null,
encoding: null },
readable: true,
domain: null,
_events: { end: [Function] },
_eventsCount: 1,
_maxListeners: undefined,
path: 'uploads\\662dcbe544804e4f50dfef1f52b40d22',
fd: null,
flags: 'r',
mode: 438,
start: undefined,
end: undefined,
autoClose: true,
pos: undefined,
bytesRead: 0 }
Package.json
{
  "name": "aws-s3-images",
  "version": "1.0.0",
  "private": true,
  "scripts": {
    "start": "node ./bin/www"
  },
  "dependencies": {
    "body-parser": "~1.17.1",
    "connect-multiparty": "^2.0.0",
    "cookie-parser": "~1.4.3",
    "debug": "~2.6.3",
    "express": "~4.15.2",
    "hbs": "~4.0.1",
    "morgan": "~1.8.1",
    "multer": "^1.3.0",
    "s3fs": "^2.5.0",
    "serve-favicon": "~2.4.2"
  },
  "description": "AWS S3 uploading images",
  "main": "app.js",
  "devDependencies": {},
  "keywords": [
    "javascript"
  ],
  "author": "reeversedev",
  "license": "MIT"
}
S3 restricts deletion to 1000 keys per DeleteObjectsRequest. Hence, after fetching the complete KeyVersions list, I check whether there are more than 1000 keys; if so, I partition the list into sublists of 1000 and pass each one to its own DeleteObjectsRequest, as below:
if (keys.size() > 1000) {
    int count = 0;
    List<List<DeleteObjectsRequest.KeyVersion>> partition = ListUtils.partition(keys, 1000);
    for (List<DeleteObjectsRequest.KeyVersion> list : partition) {
        count = count + list.size();
        DeleteObjectsRequest request = new DeleteObjectsRequest(
                fileSystemConfiguration.getTrackingS3BucketName()).withKeys(list);
        amazonS3Client.deleteObjects(request);
        logger.info("Deleted the completed directory files " + list.size() + " from folder "
                + eventSpecificS3bucket);
    }
    logger.info("Deleted the total directory files " + count + " from folder " + eventSpecificS3bucket);
} else {
    DeleteObjectsRequest request = new DeleteObjectsRequest(
            fileSystemConfiguration.getTrackingS3BucketName()).withKeys(keys);
    amazonS3Client.deleteObjects(request);
    logger.info("Deleted the completed directory files from folder " + eventSpecificS3bucket);
}
I got this problem when using the AmplifyJS library. Following the AWS documentation on the multipart upload overview:
Whenever you upload a part, Amazon S3 returns an ETag header in its
response. For each part upload, you must record the part number and
the ETag value. You need to include these values in the subsequent
request to complete the multipart upload.
But the S3 default configuration does not expose it. Just go to the Permissions tab ->
add <ExposeHeader>ETag</ExposeHeader> to the CORS Configuration.
https://github.com/aws-amplify/amplify-js/issues/61
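(For reference, an S3 CORS configuration carrying that header looks roughly like this; a sketch where the origin, methods, and headers are placeholders:)

<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <ExposeHeader>ETag</ExposeHeader>
  </CORSRule>
</CORSConfiguration>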
If anyone is still facing this issue: in my case it only happens when you pass an empty array of objects to delete, which makes the request fail with the same "MalformedXML" error.
const data: S3.DeleteObjectsRequest = {
  Bucket: bucketName,
  Delete: {
    Objects: [], // <-- here
  },
}
return s3Bucket.deleteObjects(data).promise()
So just check that the array of object keys is not empty before sending the request to AWS; see the sketch below.
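(A sketch of that guard, reusing the names from the snippet above; objectsToDelete is a placeholder for whatever key list you build:)

if (objectsToDelete.length === 0) {
  return Promise.resolve() // nothing to delete; skip the API call entirely
}
return s3Bucket.deleteObjects({
  Bucket: bucketName,
  Delete: { Objects: objectsToDelete },
}).promise()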
From my experience: just cross-check the bucket name.
final PutObjectRequest putObjectRequest = new PutObjectRequest(**bucketName**, accessKeyId, is ,meta);
If you use ActiveStorage with Minio add force_path_style: true to your config
# config/storage.yml
minio:
  service: S3
  access_key_id: name
  secret_access_key: password
  endpoint: http://example.com:9000/
  region: us-east-1
  bucket: myapp-production
  force_path_style: true # add this
input := &s3.DeleteObjectsInput{
    Bucket: bucketName,
    Delete: &s3.Delete{
        Objects: objs, // <- up to 1000 keys
        Quiet:   aws.Bool(false),
    },
}
I am using the aws-sdk-go SDK. When the number of keys in objs is over 1000, I get the same error:
MalformedXML: The XML you provided was not well-formed or did not validate against our published schema.
The request can contain a list of up to 1000 keys.
reference:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_DeleteObjects.html
For those who came here from Talend: in my case, cross-check the tS3Put component's bucket name, and in the key part give whatever name you want the uploaded file to have in S3.
As I'm new to StackOverflow, I'm not allowed to attach images here; you can open the URL below to see it. Thanks.
https://i.stack.imgur.com/Q1pW0.png
This code should work for you. You need to remember to:
1) use a unique bucket name
2) use 'originalname' instead of 'name' on your file object <-- the 'name' property does not exist
app.post('/testupload', function (req, res) {
  var file = req.files[0];
  console.log(file.path);
  console.log(file.name);
  console.log('FIRST TEST: ' + JSON.stringify(file));
  var stream = fs.createReadStream(file.path);
  S3fsImpl.writeFile(file.originalname, stream).then(
    function () {
      console.log('File has been sent - OK');
    },
    function (reason) {
      throw reason;
    }
  );
  res.redirect('/index');
});
Can you try this code:
var S3fsImpl = new S3FS('bucket-name', {
  region: 'us-east-1',
  accessKeyId: 'XXXXXXXXXXXX',
  secretAccessKey: 'XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX'
});
var fsImplStyles = S3fsImpl.getPath(file.name);
// Change us-east-1 for your region
var url = 'https://s3-us-east-1.amazonaws.com/' + fsImplStyles;
Send feedback if this code works for you.

Not able to lift Sails in production mode (sails lift --prod --verbose not working)

I want to lift my Sails app in production mode. When I run sails lift --prod --verbose, I get a bunch of errors.
My local.js file looks like this:
/**
* Local environment settings
*
* While you're developing your app, this config file should include
* any settings specifically for your development computer (db passwords, etc.)
* When you're ready to deploy your app in production, you can use this file
* for configuration options on the server where it will be deployed.
*
*
* PLEASE NOTE:
* This file is included in your .gitignore, so if you're using git
* as a version control solution for your Sails app, keep in mind that
* this file won't be committed to your repository!
*
* Good news is, that means you can specify configuration for your local
* machine in this file without inadvertently committing personal information
* (like database passwords) to the repo. Plus, this prevents other members
* of your team from commiting their local configuration changes on top of yours.
*
*
* For more information, check out:
* http://sailsjs.org/#documentation
*/
var config = {
  development: {
    connections: {
      mongo: {
        adapter: 'sails-mongo',
        host: 'localhost',
        user: '',
        password: '',
        database: 'mydata',
        schema: true
      }
    },
    mailer: {
      hostUrl: 'http://localhost:1337/',
      emailConfirm: 'confirm/',
      inviteMoreFriends: 'myspace'
    },
    geoSpatial: {
      radiusOfEarth: 6375,
      radius: 3,
      maxRecords: 20
    },
    facebook: {
      clientID: "CLIENT ID",
      clientSecret: "SECRET",
      callbackURL: "http://www.EXAMPLE.com:1337/auth/facebook/callback"
    }
  }
};
module.exports = {
  // The `port` setting determines which TCP port your app will be deployed on
  // Ports are a transport-layer concept designed to allow many different
  // networking applications run at the same time on a single computer.
  // More about ports: http://en.wikipedia.org/wiki/Port_(computer_networking)
  //
  // By default, if it's set, Sails uses the `PORT` environment variable.
  // Otherwise it falls back to port 1337.
  //
  // In production, you'll probably want to change this setting
  // to 80 (http://) or 443 (https://) if you have an SSL certificate
  port: process.env.PORT || 1337,

  // The runtime "environment" of your Sails app is either 'development' or 'production'.
  //
  // In development, your Sails app will go out of its way to help you
  // (for instance you will receive more descriptive error and debugging output)
  //
  // In production, Sails configures itself (and its dependencies) to optimize performance.
  // You should always put your app in production mode before you deploy it to a server-
  // This helps ensure that your Sails app remains stable, performant, and scalable.
  //
  // By default, Sails sets its environment using the `NODE_ENV` environment variable.
  // If NODE_ENV is not set, Sails will run in the 'development' environment.
  environment: process.env.NODE_ENV || 'development',

  development: {
    // config is placed as the attributes needed by the aws config node module
    aws: {
      region: 'REGION',
      accessKeyId: 'KEY ID',
      secretAccessKey: 'SECRET',
      cloudFrontCDN: 'EXAMPLE.cloudfront.net'
    },
    s3: {
      Bucket: 'MY_BUCKET',
      endpoint: 'ENDPOINT',
      imageUrl: 'URL'
    },
    uploads: {
      thumbnails: __dirname + '/../uploads/thumbnails/'
    }
  },

  likeprod: {
    // config is placed as the attributes needed by the aws config node module
    aws: {
      region: 'REGION',
      accessKeyId: 'KEY ID',
      secretAccessKey: 'SECRET',
      cloudFrontCDN: 'EXAMPLE.cloudfront.net'
    },
    s3: {
      Bucket: 'MY_BUCKET',
      endpoint: 'ENDPOINT',
      imageUrl: 'URL'
    },
    uploads: {
      thumbnails: __dirname + '/../uploads/thumbnails/'
    }
  },

  mandrillApiKey: "API_KEY",
  twilio: {
    accountSid: 'SECRET',
    authToken: 'TOKEN'
  },
  metaPublic: {
    groupBookNumber: '+0123456789'
  },
  connections: config[process.env.NODE_ENV].connections,
  mailer: config[process.env.NODE_ENV].mailer,
  geoSpatial: config[process.env.NODE_ENV].geoSpatial,
  facebook: config[process.env.NODE_ENV].facebook,
  // TODO: refactor the config[environment] as for connections
  current: function () {
    return sails.config[sails.config.environment];
  }
};
When I run sails lift --prod, I get this error:
$ sails lift --prod --verbose
info: Starting app...
verbose: Please run `npm install coffee-script` to use coffescript (skipping for now)
verbose: Setting Node environment...
verbose: moduleloader hook loaded successfully.
verbose: Loading app config...
/home/vgulp/Desktop/config/local.js:136
connections:config[process.env.NODE_ENV].connections,
^
TypeError: Cannot read property 'connections' of undefined
at Object.<anonymous> (/home/Desktop/vka/config/local.js:136:45)
at Module._compile (module.js:456:26)
at Object.Module._extensions..js (module.js:474:10)
at Module.load (module.js:356:32)
at Function.Module._load (module.js:312:12)
at Module.require (module.js:364:17)
at require (module.js:380:17)
at /home/Desktop/vka/node_modules/sails/node_modules/include-all/index.js:129:29
at Array.forEach (native)
at requireAll (/home/Desktop/vka/node_modules/sails/node_modules/include-all/index.js:44:9)
at buildDictionary (/home/Desktop/vka/node_modules/sails/node_modules/sails-build-dictionary/index.js:68:14)
at Function.module.exports.aggregate (/home/Desktop/vka/node_modules/sails/node_modules/sails-build-dictionary/index.js:190:9)
at Array.loadOtherConfigFiles [as 0] (/home/Desktop/vka/node_modules/sails/lib/hooks/moduleloader/index.js:102:22)
at /home/Desktop/vka/node_modules/sails/node_modules/async/lib/async.js:459:38
at Array.forEach (native)
at _each (/home/Desktop/vka/node_modules/sails/node_modules/async/lib/async.js:32:24)
at Object.async.auto (/home/Desktop/vka/node_modules/sails/node_modules/async/lib/async.js:430:9)
Can anyone suggest a solution?
[ Edited: the following answer was based on the original question, which was completely changed by the author ]
Your Sails app needs to lift in production mode, or you need to specify the port to be used in your config files.
Production mode runs your Express server on port 80.
Is your AWS instance set up to lift the app in production mode?
http://sailsjs.org/documentation/anatomy/my-app/config/env/production-js
You don't have a connection specified for production in local.js (as you are running from your desktop).
As the error rightly says,
connections:config[process.env.NODE_ENV].connections,
            ^
TypeError: Cannot read property 'connections' of undefined
process.env.NODE_ENV is 'production' when running with --prod:
var config = {
  development: {
    connections: {
      mongo: {
        adapter: 'sails-mongo',
        host: 'localhost',
        user: '',
        password: '',
        database: 'mydata',
        schema: true
      }
    },
    mailer: {
      hostUrl: 'http://localhost:1337/',
      emailConfirm: 'confirm/',
      inviteMoreFriends: 'myspace'
    },
    geoSpatial: {
      radiusOfEarth: 6375,
      radius: 3,
      maxRecords: 20
    },
    facebook: {
      clientID: "CLIENT ID",
      clientSecret: "SECRET",
      callbackURL: "http://www.EXAMPLE.com:1337/auth/facebook/callback"
    }
  },
  production: {
    connections: {
      mongo: {
        adapter: 'sails-mongo',
        host: 'localhost',
        user: '',
        password: '',
        database: 'mydata',
        schema: true
      }
    },
    mailer: {
      hostUrl: 'http://localhost:1337/',
      emailConfirm: 'confirm/',
      inviteMoreFriends: 'myspace'
    },
    geoSpatial: {
      radiusOfEarth: 6375,
      radius: 3,
      maxRecords: 20
    },
    facebook: {
      clientID: "CLIENT ID",
      clientSecret: "SECRET",
      callbackURL: "http://www.EXAMPLE.com:1337/auth/facebook/callback"
    }
  }
}
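Alternatively, a fallback when reading the config map avoids the TypeError whenever NODE_ENV is unset or has no matching key (a sketch of the relevant lines only):

// Fall back to 'development' when NODE_ENV is unset or has no entry in config.
var env = config[process.env.NODE_ENV] ? process.env.NODE_ENV : 'development';

module.exports = {
  // ...other settings as above...
  connections: config[env].connections,
  mailer: config[env].mailer,
  geoSpatial: config[env].geoSpatial,
  facebook: config[env].facebook
};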

Dojo intern set firefox profile name

Hi, I am trying to set the Firefox profile name in the environments setting of the Intern config file. I have tried
environments: [
  { browserName: 'firefox', firefox_profile: 'default' },
  { firefox_profile: 'default' }
],
and
environments: [
  { browserName: 'firefox', profile: 'default' },
  { profile: 'default' }
],
as well as
capabilities: {
  'selenium-version': '2.42.0',
  firefox_profile: 'default'
},
as mentioned in Selenium capabilities.
But Firefox still launches with an anonymous profile.
However, if I use Watir,
def setup
  @browser = Watir::Browser.new :firefox, :profile => 'default'
  goto_ecp_console_manage_page
end
the browser launches the default profile, which is 'kinit-ed' (Kerberos).
As the Selenium capabilities page you mention points out, the value of firefox_profile must be a Base64-encoded profile. Specifically, you ZIP up a Firefox profile directory, Base64 encode it, and use that string as the value of firefox_profile. The firefox-profile npm package can make this process easier. You'll end up with something like:
environments: [
  { browserName: 'firefox', firefox_profile: 'UEsDBBQACAAIACynEk...' },
  ...
],
I would recommend storing the profile string in a separate module since it's going to be around 250kb.
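(A minimal sketch of producing that string with firefox-profile; the profile path is a placeholder, and the exact constructor and callback shapes may vary by package version. The encoded() call below mirrors the plugin code further down this page:)

var FirefoxProfile = require('firefox-profile');

// Wrap an existing profile directory (placeholder path), then Base64-encode it.
var profile = new FirefoxProfile({ profileDirectory: '/path/to/firefox/profile' });
profile.encoded(function (encodedProfile) {
  console.log(encodedProfile); // paste this as the firefox_profile capability value
});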
I used @jason0x43's suggestion to rely on the firefox-profile Node.js module, and I created the following grunt task, firefoxProfile4selenium. With a simple configuration set in the Gruntfile.js, the plugin writes a file to disk with the Base64-encoded version of a zipped profile!
Here is the grunt configuration:
firefoxProfile4selenium: {
  options: {
    proxy: {
      host: '...',
      port: ...
    },
    bypass: [ 'localhost', '127.0.0.1', '...' ]
  },
  default: {
    files: [{
      dest: 'firefoxProfile.b64.txt'
    }]
  }
}
Here is the plugin:
/*global require, module*/
var fs = require('fs'),
    FirefoxProfile = require('firefox-profile'),
    taskName = 'firefoxProfile4selenium';

module.exports = function (grunt) {
  'use strict';
  grunt.registerMultiTask(taskName, 'Prepares a Firefox profile for Selenium', function () {
    var done = this.async(),
        firefoxProfile = new FirefoxProfile(),
        options = this.options(),
        host = this.options().proxy.host,
        port = this.options().proxy.port,
        bypass = this.options().bypass,
        dest = this.files[0].dest;

    // Set the configuration type for considering the custom settings
    firefoxProfile.setPreference('network.proxy.type', 2);

    // Set the proxy host
    firefoxProfile.setPreference('network.proxy.ftp', host);
    firefoxProfile.setPreference('network.proxy.http', host);
    firefoxProfile.setPreference('network.proxy.socks', host);
    firefoxProfile.setPreference('network.proxy.ssl', host);

    // Set the proxy port
    firefoxProfile.setPreference('network.proxy.ftp_port', port);
    firefoxProfile.setPreference('network.proxy.http_port', port);
    firefoxProfile.setPreference('network.proxy.socks_port', port);
    firefoxProfile.setPreference('network.proxy.ssl_port', port);

    // Set the list of hosts that should bypass the proxy
    firefoxProfile.setPreference('network.proxy.no_proxies_on', bypass.join(','));

    firefoxProfile.encoded(function (zippedProfile) {
      fs.writeFile(dest, zippedProfile, function (error) {
        done(error); // FYI, done(null) reports a success, otherwise it's a failure
      });
    });
  });
};
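(As a follow-up, the generated file can be wired into the Intern config at load time; a sketch where the file name matches the dest above:)

var fs = require('fs');

// Intern config excerpt: read the Base64 profile written by the grunt task.
environments: [
  {
    browserName: 'firefox',
    firefox_profile: fs.readFileSync('firefoxProfile.b64.txt', 'utf8').trim()
  }
],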
