The FeralHosting documentation says to do:
screen -S rtorrent rtorrent
and then Ctrl+a, d to detach. However, this resulted in me getting the error “No other window.”.
Instead do this:
screen -d -m -S rtorrent rtorrent
If you see this error after migrating from the v2 to v3 sdk change:
$s3client->deleteObjects( array(
'Bucket' => $settings['bucket'],
'Objects' => $fileKeys
) );
to:
$s3client->deleteObjects( array(
'Bucket' => $settings['bucket'],
'Delete' => array(
'Objects' => $fileKeys
)
) );
Note the new Delete array wrapping Objects. I discovered this while updating BackupBuddy.
If you’re upgrading from the v2 to the v3 SDK, or following the low-level API documentation for multipart uploads such as http://docs.aws.amazon.com/AmazonS3/latest/dev/LLuploadFilePHP.html, you will run into an impassable error. I ran into this problem while working on adding the new S3 SDK into BackupBuddy.
Error executing “CompleteMultipartUpload” on “https://s3.amazonaws.com/[…]”; AWS HTTP error: Client error: `POST https://s3.amazonaws.com/[…]` resulted in a `400 Bad Request` response:
InvalidRequest
The problem is that the completeMultipartUpload() parameters have changed, but Amazon has not properly updated its documentation, and the change is missing from the migration guide.
The “Parts” parameter now needs to be wrapped in a new array named “MultipartUpload”.
Incorrect parameters (as per documentation or v2 SDK):
$result = $s3->completeMultipartUpload(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $uploadId,
'Parts' => $parts,
));
Correct parameters for v3 SDK:
$result = $s3->completeMultipartUpload(array(
'Bucket' => $bucket,
'Key' => $key,
'UploadId' => $uploadId,
'MultipartUpload' => array(
'Parts' => $parts,
),
));
I’ve posted in the S3 forums ( https://forums.aws.amazon.com/thread.jspa?messageID=740339 ) requesting that Amazon update their documentation, so this may be fixed there at some point. Special thanks to this StackOverflow post: http://stackoverflow.com/questions/30383982/aws-s3-completemultipartupload-error
Setting up a Lambda function on Amazon Web Services to properly accept and validate an incoming Twilio text message (SMS) is much harder than it should be. Here is a very basic configuration to pass the form data submitted by Twilio, along with the Twilio validation header, through to your NodeJS script for processing. The first body mapping template converts all parameters (POST vars for the POST method, the query string for GET) into keys on the event object passed into your NodeJS Lambda function. Additionally, I’m passing Twilio’s X-Twilio-Signature HTTP header through as “event.Signature” so you can properly validate that the request really came from Twilio. After processing, we respond with the XML response and pass it through the API Gateway as-is back to Twilio using an Integration Response and its Body Mapping Template. Overly complicated, I know, but this is the simplest way I’ve figured out to do it, and it seems to match even official Twilio tutorials for similar tasks.
Content type: application/x-www-form-urlencoded
Mapping Template Code:
## convert HTML POST data or HTTP GET query string to JSON
## get the raw post data from the AWS built-in variable and give it a nicer name
#if ($context.httpMethod == "POST")
#set($rawAPIData = $input.path("$"))
#elseif ($context.httpMethod == "GET")
#set($rawAPIData = $input.params().querystring)
#set($rawAPIData = $rawAPIData.toString())
#set($rawAPIDataLength = $rawAPIData.length() - 1)
#set($rawAPIData = $rawAPIData.substring(1, $rawAPIDataLength))
#set($rawAPIData = $rawAPIData.replace(", ", "&"))
#else
#set($rawAPIData = "")
#end
## first we get the number of "&" in the string, this tells us if there is more than one key value pair
#set($countAmpersands = $rawAPIData.length() - $rawAPIData.replace("&", "").length())
## if there are no "&" at all then we have only one key value pair.
## we append an ampersand to the string so that we can tokenise it the same way as multiple kv pairs.
## the "empty" kv pair to the right of the ampersand will be ignored anyway.
#if ($countAmpersands == 0)
#set($rawAPIData = $rawAPIData + "&")
#end
## now we tokenise using the ampersand(s)
#set($tokenisedAmpersand = $rawAPIData.split("&"))
## we set up a variable to hold the valid key value pairs
#set($tokenisedEquals = [])
## now we set up a loop to find the valid key value pairs, which must contain only one "="
#foreach( $kvPair in $tokenisedAmpersand )
#set($countEquals = $kvPair.length() - $kvPair.replace("=", "").length())
#if ($countEquals == 1)
#set($kvTokenised = $kvPair.split("="))
#if ( ( $kvTokenised[0].length() > 0 ) && ( $kvTokenised[1].length() > 0 ) )
## we found a valid key value pair. add it to the list.
#set($devNull = $tokenisedEquals.add($kvPair))
#end
#end
#end
## next we set up our loop inside the output structure "{" and "}"
{
## Pass Twilio signature header through to event.Signature.
"Signature": "$input.params('X-Twilio-Signature')",
## Pass all valid params through into event var.
#foreach( $kvPair in $tokenisedEquals )
## finally we output the JSON for this pair and append a comma if this isn't the last pair
#set($kvTokenised = $kvPair.split("="))
"$util.urlDecode($kvTokenised[0])" : #if($kvTokenised.size() > 1 && $kvTokenised[1].length() > 0)"$util.urlDecode($kvTokenised[1])"#{else}""#end#if( $foreach.hasNext ),#end
#end
}
TIP: After setting up your mapping template don’t forget to DEPLOY your new changes!
var twilio = require('twilio');
var twilio_auth_token = 'YOUR_AUTH_TOKEN_HERE';
var this_api_url = 'LAMBDA_API_URL_HERE'; // Should match the full URL called by Twilio. Best to include the trailing slash in both.
exports.handler = function(event, context, callback) {
console.log( 'Parameters:' );
console.log( event );
var twilio_params = JSON.parse( JSON.stringify( event ) ); // Clone event into new var so we can remove Signature param. Can't just copy var since it copies by reference.
delete twilio_params.Signature; // Held the X-Twilio-Signature header contents.
if ( true !== twilio.validateRequest( twilio_auth_token, event.Signature, this_api_url, twilio_params ) ) {
console.log( 'Twilio auth failure.' );
return callback( 'Twilio auth failure.' );
} else {
console.log( 'Twilio auth success.' );
}
var twiml = new twilio.TwimlResponse();
// twiml.message( 'Uncomment this line to send this as a text message response back to the sender.' );
return callback( null, twiml.toString() ); // Success.
};
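For context, the TwiML the handler returns is just a small XML envelope. Here’s a hand-built approximation (illustrative only; the real handler uses the twilio library’s TwimlResponse, not this sketch) of roughly what gets passed back out:

```javascript
// Hand-built approximation of a TwiML response (illustrative; not the twilio library itself).
function buildTwiml(message) {
  var body = message ? '<Message>' + message + '</Message>' : '';
  return '<?xml version="1.0" encoding="UTF-8"?><Response>' + body + '</Response>';
}

console.log(buildTwiml());          // empty response: no reply SMS is sent
console.log(buildTwiml('Thanks!')); // replies to the sender with a text message
```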
Set the Method Response 200 HTTP status Response Model to a content type of application/xml instead of the default application/json.
You need to set the 200 response’s Body Mapping Template to output your XML directly. Set a Body Mapping Template with a Content-Type of application/xml, and for the template content use the following. Since we are using the TwimlResponse() method to build the XML, we don’t need to modify the response in any way before passing it back out.
#set($inputRoot = $input.path('$'))
$inputRoot
https://forums.aws.amazon.com/thread.jspa?messageID=725034
https://www.twilio.com/docs/api/security
http://twilio.github.io/twilio-node/#validateRequest
http://stackoverflow.com/questions/728360/most-elegant-way-to-clone-a-javascript-object
https://www.twilio.com/blog/2015/09/build-your-own-ivr-with-aws-lambda-amazon-api-gateway-and-twilio.html
You see something like the following when trying to run your NodeJS script:
Error: [...]/node_modules/epoll/build/Release/epoll.node: invalid ELF header
You’re using a Raspberry Pi (in my case a 2 B) running Raspbian with the built-in Node v0.10.29, which is missing a UTF8 patch. See my other post HERE for the solution if you have the following error (you’ll just need to upgrade to v0.12.x or newer):
../node_modules/nan/nan.h:328:47: error: 'REPLACE_INVALID_UTF8' is not a member of 'v8::String'
static const unsigned kReplaceInvalidUtf8 = v8::String::REPLACE_INVALID_UTF8;
You copied your node_modules directory over from your computer to the Raspberry Pi. npm needs to compile some node modules specifically for the Raspberry Pi, so simply copying the modules over won’t always work.
Install your node modules on the Pi itself via npm instead of just copying the node_modules directory over; things need to be compiled slightly differently to run on the Pi. In my case epoll is a dependency of the “onoff” package, so I’d run the following on the Raspberry Pi where I want it installed:
npm install onoff
Note: If you don’t have npm installed on your Raspberry Pi yet, do the following:
sudo apt-get install nodejs npm
The version of Node (v0.10.29) packaged with the Debian-based distro that comes with the Raspberry Pi 2 B is missing a patch related to UTF8. Because of this you may encounter packages which are unable to compile. You’ll see the following before various errors and the eventual failure of installing a package via npm. In my case this failed when trying to install the “onoff” package, which relies on “epoll”.
../node_modules/nan/nan.h:328:47: error: 'REPLACE_INVALID_UTF8' is not a member of 'v8::String'
static const unsigned kReplaceInvalidUtf8 = v8::String::REPLACE_INVALID_UTF8;
Install a newer version of Node, such as v0.12.x:
curl -sL https://deb.nodesource.com/setup_0.12 | sudo -E bash -
sudo apt-get install -y nodejs
Verify this worked with:
node --version
That’s it! You should now be ready to resume installing your packages. Enjoy.
Catchable fatal error: Argument 2 passed to Guzzle\Service\Client::getCommand() must be of the type array, string given, called in ../_s3lib2/Guzzle/Service/Client.php on line 76 and defined in ../_s3lib2/Guzzle/Service/Client.php on line 79
If you’re getting this error when working with the version 2 PHP SDK for Amazon Web Services’ S3 service, fear not. It’s probably as simple as passing the wrong parameters into the function. In this example I was calling listObjects() using the old SDK’s parameter format rather than the new SDK’s. Simple fix!
Wrong (eg. old SDK v1 way):
$response = self::$_client->listObjects( $settings['bucket'] );
Right (eg. new SDK v2 way):
$response = self::$_client->listObjects( array( 'Bucket' => $settings['bucket'], 'Prefix' => $prefix ) );
All fixed!
Amazon Web Services’ default nginx configuration does not have websocket support enabled. I am running SailsJS with Socket.io on NodeJS behind the default nginx proxy and discovered that websockets were failing to connect. Creating a directory named .ebextensions in the root of my application, containing a file named 01_files.config with the configuration below, solved the problem: it instructs nginx to pass websockets through. Just drop this in, deploy your app, and you should be good to go with your websockets functioning.
files:
"/etc/nginx/conf.d/websocketupgrade.conf" :
mode: "000755"
owner: root
group: root
content: |
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
I searched high and low for a solution to this problem and after finding several similar but functioning solutions I combined them to get the above.
Here’s a (very) quick and dirty overview of the steps. Follow everything below in order and you should have your Node.js app (optionally running SailsJS) deployed on a Dokku server on DigitalOcean within a few minutes.
Useful Links:
* Dokku – https://github.com/progrium/dokku
* MariaDB plugin – https://github.com/Kloadut/dokku-md-plugin
Set your SSH key into DigitalOcean if you haven’t already
* cat ~/.ssh/id_rsa.pub | pbcopy
* Paste this key into digitalocean control panel.
* Select 1GB
* San Fran
* Dokku
* Select your existing SSH key.
* Virtio + backups (enable private networking if you need droplets to connect to each other)
* Visit in your browser: http://YOUR-SERVER-IP/
* Submit, optionally setting the app URL to use subdomains. Most settings should default to correct at this point.
* cat ~/.ssh/id_rsa.pub | ssh [email protected] "sudo sshcommand acl-add dokku YOUR-APP-NAME"
** Note that if instead of YOUR-APP-NAME you put a full domain it will use that as the URL instead of setting up a subdomain. Eg api.dustinbolton.com instead of dustinbolton-api
Initialize the local repo if you have not already
* git init && git add -A && git commit -m "Initial commit"
Assign production (or staging) destination:
* git remote add production [email protected]:YOUR-APP-NAME
Create file to tell server what to run on deployment:
* touch Procfile && open Procfile
* Add the following into this new file & save:
* web: sails lift
** For non-sails framework, instead of “sails lift” this would be “node app.js”.
* git push -u production master
Done!
If you need to manually re-run the app or see the output of the run attempt (such as to troubleshoot):
* ssh [email protected]
cd /home/dokku/YOUR-APP-NAME
dokku run YOUR-APP-NAME sails lift
(or instead of “sails lift”, “node app.js”)
Adapted from the guide at:
http://matthewpalmer.net/blog/2014/02/19/how-to-deploy-node-js-apps-on-digitalocean-with-dokku/
cd /var/lib/dokku/plugins
git clone https://github.com/Kloadut/dokku-md-plugin mariadb
#git clone https://github.com/musicglue/dokku-user-env-compile.git user-env-compile
dokku plugins-install
dokku mariadb:create YOUR-APP-NAME
* This creates the mariadb instance and links it to your app automatically since the name matches. The database name is “db” by default.
You can use the credentials displayed, or access them via environment variables such as: process.env.DB_USER
Database environment variables should exist automatically, but I have seen them drop off. I have not found the cause yet, so I’ve decided to manually set the one(s) I need for the time being:
dokku config:set YOUR-APP-NAME DATABASE_URL=whateverhere
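Inside your Node app, the linked credentials can then be read from process.env. A minimal sketch (the exact variable names depend on the MariaDB plugin version and on what you set via config:set, so treat these as assumptions):

```javascript
// Read database settings from the environment (names assume the MariaDB plugin / config:set above).
var dbConfig = {
  url: process.env.DATABASE_URL || 'mysql://localhost/db', // fallback for local development
  user: process.env.DB_USER || 'root',
  password: process.env.DB_PASSWORD || ''
};

console.log(dbConfig.url);
```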
If you ever need to restart your app, run the following (changing only “YOUR-APP-NAME”):
docker restart `cat /home/dokku/YOUR-APP-NAME/CONTAINER`
Say you are trying to submit an HTML form via jQuery. If you’re trying to submit it using .submit() or .trigger(‘submit’) you may have run into a very strange problem where nothing happens, for no reason you can figure out. Here is a very basic example below. Why on Earth is this not working?
[code lang="js" highlight="12"]
<script type="text/javascript">
jQuery('#my-form-submit').click( function(e) {
e.preventDefault();
// Do some stuff here…
jQuery('#my-form').submit();
});
</script>
<form action="?ajax=4" method="post" id="my-form">
<input type="submit" name="submit" value="Submit this form!" id="my-form-submit">
</form>
[/code]
Browsers reserve some keywords for input names, such as “submit”: each named form control becomes a property on the form element, so an input named “submit” shadows the form’s submit() method. As you see in line 12 we have name="submit". This simple thing is the problem. Just change name="submit" to name="my-submit":
Does not play nicely with jQuery triggering of submit:
[code lang="js" highlight="12" firstline="12"]
<input type="submit" name="submit" value="Submit this form!" id="my-form-submit">
[/code]
Works:
[code lang="js" highlight="12" firstline="12"]
<input type="submit" name="my-submit" value="Submit this form!" id="my-form-submit">
[/code]
Unfortunately browsers offer no error or explanation to the developer as to why triggering the submit is not doing anything. This can be quite frustrating until you figure out this tiny detail.
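The shadowing mechanism can be simulated in plain JavaScript without a DOM — a rough sketch using ordinary objects to stand in for the form element and its prototype:

```javascript
// Simulating how a named input shadows form.submit (plain objects, no DOM).
var formPrototype = {
  submit: function () { return 'form submitted'; }
};

var form = Object.create(formPrototype);
console.log(typeof form.submit); // 'function' — the prototype method is reachable

// The browser exposes each named control as an own property on the form element:
form.submit = { tagName: 'INPUT', name: 'submit' }; // stand-in for <input name="submit">
console.log(typeof form.submit); // 'object' — the method is now shadowed, so form.submit() would throw
```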