<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[The Node Den]]></title><description><![CDATA[Exploring emerging web technologies]]></description><link>http://seanvbaker.com/</link><generator>Ghost 0.5</generator><lastBuildDate>Fri, 24 Apr 2026 18:08:23 GMT</lastBuildDate><atom:link href="http://seanvbaker.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Upgrading Ghost to 0.5]]></title><description><![CDATA[<p>Ghost just released a new version with a number of nice features and improvements. You can read more here:</p>

<p><a href='http://blog.ghost.org/ghost-0-5/' >http://blog.ghost.org/ghost-0-5/</a></p>

<p>If you use a Ghost blog dev workflow such as the one I explained in my <a href='http://seanvbaker.com/a-ghost-workflow/' >Ghost Workflow</a> post, it's nice to know that nothing in the new version of Ghost breaks that approach.</p>

<p><em>Note that I have only done the upgrade process as described below for upgrading from Ghost 0.4 installed on Ubuntu as I describe in my <a href='http://seanvbaker.com/a-ghost-workflow/' >Ghost Workflow</a> post.</em></p>

<p>It's always a little stressful updating your production sites - especially when you have put in some of your own process around the install. I was very happy that this install went so smoothly. For reference, here are the exact steps I followed. This process is based on Ghost's <a href='http://support.ghost.org/how-to-upgrade/' >helpful documentation</a>.</p>

<blockquote>
  <p><strong>Important Note:</strong> <em>Be sure to back up your Ghost data (blog posts and settings) using the Ghost export tool before starting any upgrade process, as explained on the <a href='http://support.ghost.org/how-do-i-upgrade-my-self-install-version-of-ghost/#backing-up'>Ghost Backup Instructions</a> page. You also need to copy any custom theme directories you might want to back up, although these should already be in your Git repo if you're using the same workflow approach as I am.</em></p>
</blockquote>

<h3 id="localmacdevinstallupgrade">Local Mac dev install upgrade</h3>

<p>First, download the latest version of Ghost from the <a href='https://ghost.org/download/' >Ghost download page</a>.</p>

<p>Now extract the files:</p>

<pre><code>cd ~/downloads
unzip ghost-0.5.0.zip -d ghost-0.5.0
</code></pre>

<p>Copy the following files to your Ghost install:</p>

<pre><code>cd ghost-0.5.0
cp *.js *.json *.md LICENSE ~/yourblog
</code></pre>

<p>Remove the <code>core</code> directory from your blog, copy in the new core directory, and copy in the new casper theme:</p>

<pre><code>rm -rf ~/yourblog/core
cp -R core ~/yourblog
cp -R content/themes/casper ~/yourblog/content/themes
</code></pre>

<p>Now you can do the new Ghost install:</p>

<pre><code>cd ~/yourblog
npm install --production
npm start
</code></pre>

<h3 id="ubuntuproductionupgrade">Ubuntu production upgrade</h3>

<p>Download the latest version of Ghost:</p>

<pre><code>cd /home/git/tmp
wget http://ghost.org/zip/ghost-0.5.0.zip
</code></pre>

<p>If you set up Ghost to run as an Upstart service, stop your ghost service using <code>stop yourblogservicename</code>. Remember to be logged in as root here, or a user that has permissions to start/stop this service.</p>

<p>Now, as <code>git</code> user, remove the core directory from your blog install and copy in the new Ghost files:</p>

<pre><code>su git
cd /home/git/yourblog
rm -rf core
cd /home/git/tmp
unzip -uo ghost-0.5.0.zip -d /home/git/yourblog
</code></pre>

<p>Now you're ready to do the install:</p>

<pre><code>cd /home/git/yourblog
npm install --production
</code></pre>

<p>You can now test it using <code>npm start</code>, or just restart your blog service (as root) if you have one:</p>

<pre><code>start yourblogservicename
</code></pre>

<p>Now check out all the new features Ghost 0.5 has to offer!</p>]]></description><link>http://seanvbaker.com/upgrading-ghost-to-0-5/</link><guid isPermaLink="false">58d806f6-b20d-4aa5-a579-a001955f5a62</guid><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Sun, 17 Aug 2014 18:35:11 GMT</pubDate></item><item><title><![CDATA[Moving user state to the browser]]></title><description><![CDATA[<p>Thanks to HTML5's <a href='http://www.w3.org/TR/webstorage/' >Local Storage</a>, it is now much more feasible to move your entire site's user state to the browser client.</p>

<h3 id="userstate">User state</h3>

<p>What is "user state" anyway? In the case of a typical e-commerce site, this would include the user's shopping cart, and any partially completed forms in the order process. You might also have a customer login status and customer preferences. For Pampered Poultry, I have a cart and an order form.</p>

<p>Since older browsers don't support Local Storage, relying on it alone for a public web site is not acceptable. So I opted to store the user's cart in browser Cookies, since the cart data is so small and simple. I decided to store any partially completed forms in Local Storage for users with modern browsers, but fall back to Cookies for users with older browsers.</p>

<h3 id="testingiflocalstorageissupported">Testing if Local Storage is supported</h3>

<p>It's very easy to test if the user's browser supports Local Storage. For example:</p>

<pre><code>// Determine if Local Storage is available
if (typeof(Storage) !== 'undefined') {
    localStorageEnabled = true;
} else {
    localStorageEnabled = false;
}
</code></pre>

<h3 id="usinglocalstorageandcookies">Using Local Storage and Cookies</h3>

<p>Local Storage and Cookies both use a key/value approach, so sharing the same functionality across these two technologies can be easy. Cookies are more limited in size and performance: as a starting guide, you shouldn't exceed 50 cookies per domain, or more than about 4093 bytes per cookie. Local Storage typically grants you around 5 MB of data space!</p>

<p>The key/value approach becomes very powerful when you realize you can use a key such as <code>'cart'</code> and a value that is a JSON string of all the user's shopping cart data. That means we only need one simple line of code to write or read the whole cart object to/from Local Storage or Cookies!</p>

<blockquote>
  <p>Note that when you store data in Cookies, that data will be sent back and forth with every server request. This is another reason why Local Storage is such an improvement, as it lives on the browser and is not sent in the HTTP requests.</p>
</blockquote>

<h4 id="toreadandwritelocalstorage">To read and write Local Storage:</h4>

<p>The following code sets the Local Storage key <code>'USER_CART'</code> to a JSON string representing the current cart object data:</p>

<pre><code>localStorage.setItem('USER_CART', JSON.stringify(cart));
</code></pre>

<p>Now you can read this data out of Local Storage and process it back to JavaScript data as follows:</p>

<pre><code>USER_CARTstr = localStorage.getItem('USER_CART');

if (USER_CARTstr) {
    cart = JSON.parse(USER_CARTstr);
} 
</code></pre>

<h4 id="toreadandwritecookies">To read and write Cookies:</h4>

<p>I use the following two JavaScript functions to emulate the Local Storage functions above:</p>

<pre><code>function setCookie(name, value) {
    var cookieStr = name + '=' + value + ';path=/;domain=' + thisSiteDomain;
    document.cookie = cookieStr;
}

function getCookie(name) {
    var nameEQ = name + "=";
    var ca = document.cookie.split(';');
    for(var i=0;i &lt; ca.length;i++) {
        var c = ca[i];
        while (c.charAt(0)==' ') c = c.substring(1,c.length);
        if (c.indexOf(nameEQ) == 0) return c.substring(nameEQ.length,c.length);
    }
    return null;
}
</code></pre>

<blockquote>
  <p>Note: The setCookie function expects a global <code>thisSiteDomain</code> variable to contain your site's <em>domain</em>, since cookies are stored at the domain level. For site-wide cookies, use <code>'.mydomain.com'</code>. Or you can keep your cookies per subdomain, such as <code>'orders.mydomain.com'</code>. <strong>For <em>localhost</em>, such as when testing on your local laptop, use an empty string (<code>''</code>) for the domain.</strong></p>
</blockquote>
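<p>To make the fallback concrete, here is a sketch of a small wrapper that serializes state to JSON and writes it through whichever backend the browser supports. The name <code>makeStateStore</code> and its backend interface are mine, not part of the site's actual code; the backend is injected so the same logic works with <code>localStorage.setItem</code>/<code>getItem</code> or with the <code>setCookie</code>/<code>getCookie</code> helpers above:</p>

```javascript
// A sketch of the fallback described above. makeStateStore and its backend
// interface are my own names, not part of the site's actual code.
function makeStateStore(backend) {
    return {
        save: function(key, obj) {
            backend.set(key, JSON.stringify(obj));
        },
        load: function(key) {
            var str = backend.get(key);
            if (!str) return null;
            try {
                return JSON.parse(str);
            } catch (err) {
                return null;  // treat corrupt data as missing
            }
        }
    };
}

// In the browser you would pick the backend once at startup, e.g.:
//   var backend = localStorageEnabled
//       ? { set: function(k, v) { localStorage.setItem(k, v); },
//           get: function(k) { return localStorage.getItem(k); } }
//       : { set: setCookie, get: getCookie };
// Here an in-memory stub keeps the sketch self-contained:
var memory = {};
var store = makeStateStore({
    set: function(k, v) { memory[k] = v; },
    get: function(k) { return memory[k]; }
});

store.save('USER_CART', { items: [{ sku: 'egg-basket', qty: 2 }] });
var cart = store.load('USER_CART');
```

<p>The rest of the application only ever talks to <code>store.save</code> and <code>store.load</code>, so the storage technology can change without touching the cart logic.</p>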

<h3 id="jsonconcerns">JSON concerns</h3>

<p>A couple notes about using JSON functions in the browser:</p>

<p><strong>Browser support-</strong> Older versions of IE do not support the JSON JavaScript functions. To work around this issue, just download and reference the <a href='http://bestiejs.github.io/json3/' >JSON 3 library</a>. Use the following code in the <code>&lt;head&gt;</code> section of your HTML:</p>

<pre><code>&lt;!--[if lte IE 8]&gt;
&lt;link href='http://seanvbaker.com/css/ie.css'  rel="stylesheet" /&gt;
&lt;script type="text/javascript" src='http://seanvbaker.com/js/json3.min.js' &gt;&lt;/script&gt;
&lt;![endif]--&gt;
</code></pre>

<p><strong>JSON.parse-</strong> Unless you are 100% certain the string in question is valid JSON, be sure to run the <code>parse</code> function in a try/catch:</p>

<pre><code>try {
    data = JSON.parse(dataStr);
} catch(err) {
    // Handle error here
}
</code></pre>

<p>If <code>JSON.parse</code> tries to process invalid JSON, it will crash your application unless you trap the error. I tend not to worry about it for front end variables that have only been created using a <code>JSON.stringify()</code> command. But I do use it on anything sent to/from the server via Ajax, etc.</p>

<p>Next post I will talk about <em>Content-forward design</em>.</p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/moving-user-state-to-the-browser/</link><guid isPermaLink="false">415258b4-63c7-47ed-9803-3fccd8af20a7</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Thu, 30 Jan 2014 18:09:05 GMT</pubDate></item><item><title><![CDATA[The HTML5 Approach]]></title><description><![CDATA[<p>The term "HTML5" might seem to relate most directly to HTML- a site's layout and user interface. But the most dramatic implications of HTML5 relate to web application architecture.</p>

<h3 id="thehtml5webapp">The HTML5 Web App</h3>

<p>An HTML5 web app employs some or all of the following emerging technology standards:</p>

<ul>
<li><strong>HTML5</strong> tags- new browsers support more advanced layout and user interface controls</li>
<li><strong>Style Sheets</strong>- CSS3 allows powerful control and easier programmatic access to layout, element positioning, and design styles</li>
<li><strong>Ajax</strong>- Asynchronous JavaScript enables real-time dynamic access to server resources from a static web page</li>
<li><strong>Local Storage</strong>- A simple key value pair data store in the browser makes it possible to maintain user state on the client instead of the server</li>
</ul>

<h3 id="beforehtml5">Before HTML5</h3>

<p>The PHP, ASP, .NET "dynamic web pages" architecture might seem limiting compared to today's highly interactive and responsive web apps. However, there are some major benefits for the programmer:</p>

<ul>
<li>Design follows a natural story board. Somewhat static pages take in user input to be processed when the user is ready.</li>
<li>Story board design approach naturally organizes code. Each "page" (file) becomes analogous to a function.</li>
<li>User state is managed on the server, which is fundamentally more secure and controllable than the client browser</li>
</ul>

<h3 id="anhtml5architecture">An HTML5 architecture</h3>

<p>I took an HTML5 approach for the <a href='http://www.pamperyourpoultry.com/' >Pampered Poultry</a> e-commerce site. The web site runs as a client app, maintaining user state via cookies and/or local storage depending on browser support. All dynamic content (products and news) is loaded via Ajax calls to the Node server. The Node server serves the static files (such as HTML, JavaScript, CSS, and images), implements the Ajax handlers, and interfaces with MySQL for data storage.</p>

<p>Now, let's look at <a href='http://seanvbaker.com/moving-user-state-to-the-browser' >managing state in the browser</a> using cookies and local storage.</p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/the-html5-approach/</link><guid isPermaLink="false">971fdcd8-8ded-4ed2-a178-2cd81f56c6f7</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Thu, 30 Jan 2014 17:58:20 GMT</pubDate></item><item><title><![CDATA[Using recursion to tame callback hell]]></title><description><![CDATA[<p>Asynchronous programming with JavaScript can lead to what is disparagingly termed "callback hell."</p>

<p>I feel that Node.js beginners are told far too often that the only way to accomplish asynchronous JavaScript is to use an async library. In my experience, solving relatively simple async problems is the best way to learn Node, and will give you a deeper more intuitive understanding of the environment. (It will also give you the tools you will need to implement more complicated async solutions.)</p>

<h3 id="callbackhell">Callback hell</h3>

<p>"Callback hell" refers to the excessive code nesting created as you attempt to code a series of logical steps that each require a call to an asynchronous (non-blocking) function. For example, the following function processes three asynchronous steps one at a time:</p>

<pre><code>function nestedExample() {
    setTimeout(function() {
        console.log('Do step one');
        setTimeout(function() {
            console.log('Do step two');
            setTimeout(function() {
                console.log('Do step three');
                setTimeout(function() {
                    console.log('Finalize process');
                    return;
                }, 200);
            }, 200);
        }, 200);
    }, 200);
}
</code></pre>

<blockquote>
  <p>Note: Using <code>setTimeout()</code> is a great way to simulate an asynchronous function when you are learning. Much like a database or http call, it takes a callback function to execute when it completes. Here the delay is set to 200ms.</p>
</blockquote>

<p>Notice the way the code marches to the right, with each successive callback nested inside the previous step. This code can be very hard to read and manage.</p>

<h3 id="arecursiveapproach">A recursive approach</h3>

<p>Before jumping to a full-fledged asynchronous solution such as <a href='https://github.com/caolan/async' >async</a>, or <a href='https://github.com/kriskowal/q' >q</a>, consider whether a recursive pattern might work. The above example would look something like this:</p>

<pre><code>function recursiveExample() {

    var step = 'one';
    processData();

    function processData() {

        switch (step) {
            case 'one':
                setTimeout(stepOne, 200);
                break;

            case 'two':
                setTimeout(stepTwo, 200);
                break;

            case 'three':
                setTimeout(stepThree, 200);
                break;

            case 'finalize':
                stepFinalize();
                break;
        }
    }

    function stepOne() {
        console.log('Do step one');
        step = 'two';
        processData();
    }

    function stepTwo() {
        console.log('Do step two');
        step = 'three';
        processData();
    }

    function stepThree() {
        console.log('Do step three');
        step = 'finalize';
        processData();
    }

    function stepFinalize() {
        console.log('Finalize process');
        return;
    }

}
</code></pre>

<p>This approach creates more code, but it is very simple, easy to read, and permits a large degree of flexibility.</p>

<blockquote>
  <p>Note that the <code>step</code> variable is visible to all the step functions because of the closure.</p>
</blockquote>

<p>I have used steps named "one", "two", and "three" above, but in practice you can name these to fit your process - such as "validate_user", "process_order", and "send_email" for example.</p>

<p>You can easily expand this approach to handle some parallel tasks, and branching conditionals in the async process steps. (You can also start with an array of records to process and <code>.pop()</code> each next value to process until the array is empty.)</p>
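<p>A minimal sketch of that array variant (the function names here are mine, for illustration): pop one record per pass, process it asynchronously, and recurse until the list is empty:</p>

```javascript
// A sketch of the array variant mentioned above: pop one record per pass
// and recurse until the list is empty. The names here are my own.
var processed = [];

function processOne(record, callback) {
    // Stands in for any real async call (database query, http request, etc.)
    setTimeout(function() {
        processed.push(record);
        callback();
    }, 10);
}

function processAll(records, done) {
    if (records.length === 0) {
        done();
        return;
    }
    processOne(records.pop(), function() {
        processAll(records, done);  // recurse for the next record
    });
}

processAll(['a', 'b', 'c'], function() {
    console.log('All records processed:', processed);
});
```

<p>Because <code>.pop()</code> takes from the end of the array, the records are processed in reverse order; use <code>.shift()</code> instead if order matters.</p>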

<p>This is certainly not the best approach for many cases - but it is a great way to become more intimate with asynchronous programming before jumping into an async library. And you might be surprised how manageable your async challenge becomes once you start to become comfortable with some new patterns.</p>

<p>I was able to implement all of the MySQL transaction functionality for my eCommerce sites in a clean readable manner without using an async library.</p>

<p>I think the jump to a formal async library comes when the complexity level increases to the next level, or when you are implementing a larger scope project that requires integrating asynchronous functionality across multiple modules.</p>

<blockquote>
  <p>Read more on how to implement <em>promises</em> <a href='http://promises-aplus.github.io/promises-spec/' >here</a>.</p>
</blockquote>

<p>Next post I will talk about <em>Moving user state to the browser</em>.</p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/using-recursion-to-tame-callback-hell/</link><guid isPermaLink="false">6a399b2d-834b-41e8-9dbe-91d23df77410</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Wed, 29 Jan 2014 01:27:24 GMT</pubDate></item><item><title><![CDATA[Why async?]]></title><description><![CDATA[<p>Node's asynchronous (non-blocking) model is the most challenging hurdle to learning Node, but is also the fundamental reason Node is so compelling. This new "parallel programming" technique is a consequence of Node's inherently "event-driven" approach which naturally fosters efficient and scalable solutions for I/O intensive server applications.</p>

<h3 id="proceduralprogramming">Procedural programming</h3>

<p>If you have done any front-end browser programming with the browser DOM and AJAX, you are already familiar with an event-driven environment and some asynchronous programming. Since the front end tends to respond to user events, the event-driven approach is easier to understand in this context.</p>

<p>But on the back end, the instinct to build services in a procedural manner is hard to change. The procedural approach closely mimics our logical approach to solving problems. We see the solution as a flow diagram, with conditional branches, process loops, and an eventual output. Everything happens one step after another. This is great for the programmer, but creates a challenge for the server. Each request to the server essentially fires up a dedicated program to process the request. The process may do many things - query a database, request data from another server, write a file, etc. The server must keep this ball in the air while still serving other requests. To keep things moving, the server manages many threads at the same time - even when they are just sitting there waiting for a response from the database, file system, or external server.</p>

<h3 id="asynchronousprogramming">Asynchronous programming</h3>

<p>Node.js does not directly support the procedural approach. Instead, it leverages the asynchronous capability of JavaScript and puts the burden (and the control) in the hands of the programmer.</p>

<blockquote>
  <p>In Node, you still implement a procedural process, but you must design an asynchronous coding solution for it. As a byproduct of this design process, you will craft a fundamentally more efficient program.</p>
</blockquote>

<p>In Node, whenever you reach a step that might take some time (like querying a database, writing a file, or making an http request to another server), you call an asynchronous function. The asynchronous function is written in such a way that it returns control back to your process flow right away, but continues to execute on the server. This asynchronous function almost always takes a "callback" as an argument, which is a function you pass to it to be executed when it completes. Because of the "closure", the callback function has access to the caller's variables and keeps their state on the server until the asynchronous function completes.</p>

<p>If you don't already understand closure in JavaScript - here is a nice quick <a href='http://www.javascriptkit.com/javatutors/closures.shtml' >explanation</a>.</p>

<p>On a busy Node server, there may be many simultaneous requests active on the server at any one time, which seems to go against the notion you get when you hear that Node is single-threaded. The answer to this apparent contradiction comes from the way JavaScript manages function scope using closure. Even though there are multiple independent client requests active at one time, each function called to handle a request maintains its own set of variables until it completes. </p>
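<p>Here is a tiny illustration of that point (the names are mine, for illustration): two overlapping "requests" each keep their own variables via closure while the simulated async work is in flight:</p>

```javascript
// Two overlapping "requests" each keep their own `user` variable via closure.
// fakeDbLookup stands in for a real non-blocking database call.
var results = [];

function fakeDbLookup(id, callback) {
    setTimeout(function() {
        callback('row for ' + id);
    }, 10);
}

function handleRequest(user) {
    fakeDbLookup(user, function(row) {
        // `user` is still in scope here thanks to the closure,
        // even though another request started in the meantime
        results.push(user + ' got: ' + row);
    });
}

handleRequest('alice');
handleRequest('bob');  // both requests are "in flight" at the same time
```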

<p>Here is a helpful overview of JavaScript functions and scope: <a href='https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions_and_function_scope' >Functions and function scope</a></p>

<h3 id="greatforiointensivework">Great for I/O intensive work</h3>

<p>I like to think of Node as a "conductor", orchestrating requests and the various I/O tasks required to fulfill the request. An I/O task might be querying a database, reading a file, or making an http request to an API. Also, Node.js supports delegating a CPU-intensive task to a <a href='http://www.andygup.net/node-js-moving-intensive-tasks-to-a-child-process/' >"child process"</a> which gets its own thread.</p>

<h3 id="callbackhell">Callback hell</h3>

<p>This is a disparaging term that refers to the syntactical challenge that can crop up when you try to implement a long branching procedural process flow that requires many nested asynchronous function calls. There are a couple ways to mitigate this scenario:</p>

<ul>
<li>There are special libraries and approaches to support complicated logical flows in an asynchronous environment. Popular solutions include <a href='http://howtonode.org/promises' >promises</a>, <a href='https://github.com/caolan/async' >async</a>, and <a href='http://book.mixu.net/node/ch7.html' >other approaches</a></li>
<li>Define functions outside your logic flow code block instead of putting them in-line. With some clever design and function naming, this can yield surprisingly impressive results</li>
<li>Using recursion can solve many asynchronous list-looping patterns with a small amount of nice readable code</li>
<li>Just accept some level of function nesting - it is not as bad as it seems once you accept it as part of the language. I find I can nest up to three async calls without confusion, and that I can code a majority of my solutions without needing to go further.</li>
</ul>

<p>Next post I give an example of <a href='http://seanvbaker.com/using-recursion-to-tame-callback-hell' >using recursion to tame callback hell</a>.</p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/why-async/</link><guid isPermaLink="false">bc302273-1af6-4f0f-bd23-311fb712413c</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Thu, 16 Jan 2014 21:37:25 GMT</pubDate></item><item><title><![CDATA[Understanding Node.js]]></title><description><![CDATA[<p><em>This introduction to Node.js is for those who have experience creating web sites and applications using server-based solutions such as PHP, ASP, or .NET, etc.</em></p>

<h3 id="movingfromiisandapache">Moving from IIS and Apache</h3>

<p>As someone who has implemented many server-based web solutions, from e-commerce to integrated corporate financial systems, I hope to extol the virtues of Node.js from a familiar vantage point.</p>

<h3 id="whatnodejsis">What Node.js is</h3>

<p><strong>Node.js</strong> is an application that implements and manages a JavaScript run-time environment using <a href='http://en.wikipedia.org/wiki/V8_(JavaScript_engine)'>Google's V8 engine</a>, along with a library of built-in core functionality that implements much of the standard network and OS plumbing required to build web-centric applications and servers.</p>

<p>Node also includes the <a href='https://npmjs.org/' >npm</a> package manager that facilitates the use of community created functionality (in the form of <a href='http://book.mixu.net/node/ch8.html' >modules</a>).</p>

<h3 id="whatnodejsdoes">What Node.js does</h3>

<p>Node.js can do many things, but this blog focuses on using Node where we might have used IIS and Apache in the past. And as you will see, the flexibility of creating your own web servers will open the door to many intriguing new approaches.</p>

<h3 id="thenodejsway">The Node.js way</h3>

<p>One of the most powerful aspects of Node.js is its minimalism. Even though there are thousands of modules available, the community generally fosters a "use only what you need" approach.</p>

<blockquote>
  <p>If I had to sum up any development wisdom I might have acquired over the years, I would say: <em>"Less is more"</em> - the fewer layers, functions, and lines of code, the more manageable, scalable, and malleable the final solution will be.</p>
</blockquote>

<p>Of course you need layers, functions, and other organizing constructs to succeed. The trick is to not introduce any superfluous complexity, and always understand the purpose and use of each unit of abstraction (module, function, or framework.)</p>

<p>For those just learning Node.js, I strongly recommend starting with as few modules as possible. I often see folks start with a framework and several modules that claim to simplify things for you. While these modules often do have great value for their intended use, beginners will end up learning the framework rather than the fundamental approaches critical to succeeding with Node.js and asynchronous server programming.</p>

<h3 id="javascript">JavaScript</h3>

<p>One of Node's big selling points is JavaScript. Benefits of using JavaScript on the server side include:</p>

<ul>
<li>Uses same language as the browser client</li>
<li>Supports asynchronous and object paradigms</li>
<li>Plays well with JSON</li>
<li>Many developers already know and enjoy JavaScript</li>
</ul>

<h3 id="afewusecasesfornodejs">A few use cases for Node.js</h3>

<p>Here are just a few things you might use Node.js for that would be more challenging and/or less scalable with a traditional IIS/Apache server approach:</p>

<ul>
<li>A scalable RESTful API server that does not need to manage client sessions or serve static file content (e.g., a back end service to an HTML5 app which manages its own state already)</li>
<li>A transaction server that brokers transactions between numerous clients (e.g., a massive multiplayer HTML5 or native app game server)</li>
<li>A centralized state management server that allows numerous clients to update and access real-time information (e.g., a low-latency in-memory data server)</li>
<li>A server that fulfills requests by consuming and aggregating other web services (e.g., a mashup service)</li>
</ul>

<p>Felix's <a href='http://nodeguide.com/convincing_the_boss.html' >Convincing the boss</a> article has some great use case examples.</p>

<p>Next post I answer the question, <em><a href='http://seanvbaker.com/why-async' >Why async?</a></em></p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/understanding-node-js/</link><guid isPermaLink="false">8aeca5fc-8f22-4148-8342-ecd5f7f5a6da</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Sat, 11 Jan 2014 17:39:47 GMT</pubDate></item><item><title><![CDATA[Using Git to deploy Node.js sites to Ubuntu]]></title><description><![CDATA[<p>I was introduced to version control on my first corporate assignment. It was a Microsoft shop, and the company's e-commerce codebase was managed more like a software product than a living web site. Enhancements and fixes were developed and deployed to an internal staging environment for full regression testing before being certified for deployment into production. I remember waiting a week for a typo fix to roll live.</p>

<p>We used Microsoft's Visual Source Safe, and I quickly found it to be very powerful and deceptively simple to use. (I have spent far too many hours in my career trying to explain how VSS works to developers new to it.)</p>

<p>For personal projects and small teams I continued to use VSS for version control, but more importantly, as a project organization and production deployment tool. Even in situations that required less formal testing, versions were carefully "labeled" and deployed to production. This helped the team share development code, kept the source code reliable, and most importantly, helped me keep my sanity ;)</p>

<blockquote>
  <p>Funny how times change, but many of the same magic steps seem to persist, only with different approaches and lingo. Where we had the ceremonious "pulling of the label" to production with VSS, today I "push the master branch" to production using Git.</p>
</blockquote>

<h3 id="usinggitfordeployment">Using Git for deployment</h3>

<p>This article explains one way to use Git to deploy Node.js sites to an Ubuntu Linux server. I do not get into the version control and development management aspects of Git. I also assume you are developing on a Mac, but most of the development side should work with Git on Windows as well.</p>

<p>This approach keeps a remote Git repository on the production server that you will push your updates to. A Git "hook" on the server will automatically execute whenever you push a code update to the server. The hook will stop the Node app service, deploy the updated files from the Git repository to the actual Node app directory, and then restart the Node app again.</p>

<p><strong>Prerequisites:</strong></p>

<ul>
<li>You need to have Git installed on your development machine: <a href='http://git-scm.com/download/mac' >git-scm.com</a> or <a href='http://brew.sh/' >Homebrew</a>, etc.</li>
<li>Configure your Ubuntu server as I describe in my <a href='http://seanvbaker.com/setting-up-a-node-website/' >Setting Up a Node.js Website</a> post. This will:
<ul><li>Create a <em>git</em> user on the server who owns the app files</li>
<li>Create an Upstart service for your Node app</li></ul></li>
</ul>

<h4 id="setupgitfordevelopment">Setup Git for development</h4>

<p>Go to the directory of your project on your development machine <code>cd ~/mysite</code> and run <code>git init</code> to create a local Git repository for the site.</p>

<p>You may have some files in your project that you do not want to track with Git or deploy to production. To have Git ignore these files, use a <code>.gitignore</code> file to filter out all files except those you want to manage and deploy to production. (Such files might include log files, environment settings files, passwords, keys, etc.)</p>

<blockquote>
  <p>Note: Git only tracks files, not directories. So if you want to include an empty directory as part of your project deployment, be sure to create a readme.md file or something so the directory has at least one file. (For example, I have a /log directory that needs to be created as part of the app.)</p>
</blockquote>
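
<p>You can see this behavior for yourself in a throwaway repo. A quick sketch (safe to run - it works entirely in a temp directory; the <em>log</em> directory name just mirrors the example above):</p>

```shell
# Demonstration (in a throwaway repo) that Git does not track empty directories.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "You"
mkdir log
git add .
git status --porcelain          # prints nothing - the empty log/ directory is invisible to Git
touch log/readme.md
git add .
git status --porcelain          # now the directory shows up via its file: A  log/readme.md
```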

<p>To create a .gitignore file for your project, go to your <code>~/mysite</code> project root directory on your development machine, and create a new file called <code>.gitignore</code> and use something like this:</p>

<pre><code>env.json
*.log
public/images/uploads/*.*
!public/images/uploads/readme.md
ssl/
.DS_Store
</code></pre>

<p>This will instruct Git to:</p>

<ul>
<li>Ignore any env.json files</li>
<li>Ignore any files with a .log extension</li>
<li>Ignore all files in public/images/uploads/</li>
<li>But DO include public/images/uploads/readme.md
<ul><li>This will pull in the <em>uploads</em> directory</li></ul></li>
<li>Ignore any files in ssl/ (where I keep my keys)</li>
<li>Ignore those pesky Mac .DS_Store files</li>
</ul>

<p>You can test your .gitignore file by running this in your <em>~/mysite</em> directory:</p>

<pre><code>git add .
git commit -m "init commit"
git ls-tree --full-tree -r HEAD
</code></pre>

<p>This will add all the trackable files, and the last command shows you the files that were added and are now tracked by Git. You should only see the files you expect. If not, edit your <em>.gitignore</em> file until it works as required for your needs. To "un-track" a file in Git that was added by accident, use <code>git rm --cached &lt;file&gt;</code> for each file you need to remove.</p>
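
<p>Here is the un-track step in action, as a self-contained sketch in a throwaway repo (<em>env.json</em> is just an example name from the .gitignore above):</p>

```shell
# Sketch: stop tracking a file that was committed by accident,
# without deleting it from disk. Runs entirely in a temp directory.
set -e
cd "$(mktemp -d)"
git init -q .
git config user.email "you@example.com"
git config user.name "You"
echo "secret" > env.json
git add .
git commit -qm "init commit"
echo "env.json" > .gitignore
git rm --cached env.json        # un-track it; the file stays on disk
git add .gitignore
git commit -qm "stop tracking env.json"
git ls-tree --full-tree -r HEAD # .gitignore is listed; env.json is not
```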

<h4 id="installgitonyourserver">Install Git on your server</h4>

<p>SSH into your VM as a user with sudo privileges, then install Git:</p>

<pre><code>sudo apt-get update
sudo apt-get install git
</code></pre>

<h4 id="setupyournodesiteforgit">Set up your Node site for Git</h4>

<p>I assume your Node site is set up as I explain <a href='http://seanvbaker.com/setting-up-a-node-website/' >here</a>. The <em>git</em> user owns the app files, which are located at <code>/home/git/mysite</code>.</p>

<p>Log in to your server as the <code>git</code> user so that the repository is created with permissions the <em>git</em> user can use.</p>

<pre><code>su git
</code></pre>

<p>First, add a bare git repository on the server for your site:</p>

<pre><code>cd /home/git
mkdir mysite.git
cd mysite.git
git --bare init
</code></pre>

<p>Next, create the <em>post-receive</em> Git hook file that will deploy updates to the <em>/home/git/mysite</em> directory:</p>

<pre><code>cd /home/git/mysite.git/hooks
cat &gt; post-receive
</code></pre>

<p>Then paste in this content, modifying the Node app path if needed:</p>

<pre><code>#!/bin/sh
GIT_WORK_TREE=/home/git/mysite git checkout -f
</code></pre>

<p>(<code>ctrl-d</code> to exit the cat process)</p>

<p>Change permissions on the file to allow it to be executed:</p>

<pre><code>chmod +x post-receive
</code></pre>

<p>Now go back to your local development Mac and add the remote git repository you just created:</p>

<pre><code>cd ~/mysite
git remote add mysite_label git@yourproductionserver.com:mysite.git
</code></pre>

<p>Git is going to use <code>mysite_label</code> as a name for the repository. You will be typing <code>mysite_label</code> a lot, so use something short that still describes your site ;)</p>

<p>Test the deployment process on your mac. (Add your site files to Git and make an initial commit if you did not do so when you first set up the local git repo for this project: <code>git add .</code> and <code>git commit -m "init commit"</code>).</p>

<pre><code>cd ~/mysite
git push mysite_label master
</code></pre>

<p>The <code>git push</code> command will prompt you for the password for the git user you created on the production server (unless you're using an ssh key to connect.)</p>

<p>This should push your committed code changes to the server, and the git hook on the server should automatically deploy the updated files to the node app directory. ssh to the production server and verify that your code updates have now indeed been deployed to <em>/home/git/mysite</em>.</p>

<p><strong>Troubleshooting:</strong> Be sure you can ssh to your server as <em>git</em>: <code>ssh git@yourproductionserver.com</code>. Once you log in as git user, you should be in the <em>/home/git</em> directory and your <em>mysite.git</em> repository should be right in that directory.</p>

<p>There is a limitation to our deployment process at this point. If we make updates to the node app or its code dependencies, the node app needs to be restarted in order for the updates to take effect. We will address this next when we enable the <em>git</em> user to start and stop the node service, and update the Git post-receive hook to stop and start the service.</p>

<h4 id="enablenodeapprestartondeployment">Enable Node app restart on deployment</h4>

<p>For this step, you need to have your node app set up as an Upstart service as explained <a href='http://seanvbaker.com/setting-up-a-node-website/' >here</a>.</p>

<p>We can leverage the node service commands to enhance our Git deployment process to restart the Node app.</p>

<p>First we need to change the <em>sudoers</em> file to allow the <em>git</em> user to sudo as root to start and stop your Node service. (Only root-access users can stop and start Upstart services.) We also need to prevent sudo from prompting for a password for this task; otherwise the automated git hook script would hang at the prompt.</p>

<p>Add one line to the end of the sudoers file. I like to use vi, but you can use the editor of your choice. To use vi: <code>export EDITOR="vi"</code>.</p>

<blockquote>
  <p>Here is an overview and command reference for using vi to edit files: <a href='http://www.cs.colostate.edu/helpdocs/vi.html' >Basic vi Commands</a>.</p>
</blockquote>

<p>Now to edit the sudoers file:</p>

<pre><code>visudo
</code></pre>

<p>visudo opens the sudoers file in your editor and validates the syntax before allowing you to save changes, since mistakes in the sudoers file can lock you out of sudo access on your server.</p>

<p>It is important to add the following line to the <strong><em>end</em></strong> of the file:</p>

<pre><code>git ALL = (root) NOPASSWD: /sbin/stop mysite, /sbin/start mysite
</code></pre>

<p>This added line will allow the <em>git</em> user to sudo as root with no password prompt, but only when running the start and stop commands for your service. Note: <code>mysite</code> refers to the service name you created for your Node app - the same name as your Upstart <em>.conf</em> file. Also note that you need to use the full path for the start/stop commands; to double-check the path, use <code>which start</code>.</p>

<p><strong>Note</strong>: You need to logout and log into the server again for the sudoers file change to take effect in your ssh session.</p>

<p>To test, ssh as <code>git@yourserver.com</code>.</p>

<p>You should be able to stop/start the service as <em>git</em> user as follows:</p>

<pre><code>sudo /sbin/stop mysite
sudo /sbin/start mysite
</code></pre>

<p>Now update the git <em>post-receive</em> hook to stop and start the service:</p>

<pre><code>cd /home/git/mysite.git/hooks
cat &gt; post-receive
</code></pre>

<p>Then paste in this file content, modified for your site if need be:</p>

<pre><code>#!/bin/sh
sudo /sbin/stop mysite
GIT_WORK_TREE=/home/git/mysite git checkout -f
sudo /sbin/start mysite
</code></pre>

<p>(<code>ctrl-d</code> to finish)</p>

<p>Now when you deploy updates via Git, your mysite Node service will restart as well. Make a small change to a file in your Node site on your Mac, commit the change, and deploy:</p>

<pre><code>cd ~/mysite
git add .
git commit -m "test deployment process"
git push mysite_label master
</code></pre>

<p>You should see the new process id # returned to confirm the restart of the service - something like this:</p>

<pre><code>git push mysite_label master

git@yoursite.com password: xxxxx

Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 499 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
remote: mysite stop/waiting
remote: mysite start/running, process 3469
To git@yoursite.com:mysite.git
   60f6037..ece0ca8  master -&gt; master
</code></pre>

<p>Seeing the process # gives us nice comfort that the service has indeed been restarted.</p>

<h3 id="conclusion">Conclusion</h3>

<p>Now you can develop your Node.js app, stage and commit the changes via Git, and then deploy the updates to production with just one command :)</p>

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Next post I talk about <a href='http://seanvbaker.com/understanding-node-js/' >Understanding Node.js</a>.</p>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/using-git-to-deploy-node-js-sites-on-ubuntu/</link><guid isPermaLink="false">95256855-38ef-4a21-acea-9bec9cf95e63</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Wed, 18 Dec 2013 14:04:30 GMT</pubDate></item><item><title><![CDATA[Setting Up a Node.js Website]]></title><description><![CDATA[<p>What if you just want to get started hosting a basic website with Node? Node.js makes a fine basic web server that is easy to set up, lets you get your feet wet, and gives you a place to start expanding functionality.</p>

<p>There are a few common challenges to overcome first such as:</p>

<ul>
<li>Installing Node.js</li>
<li>Using Node as a static file web server</li>
<li>Running a Node app as a service so it starts on system boot and restarts after any potential crashes</li>
<li>Hosting more than one Node app site on the same server</li>
</ul>

<blockquote>
  <p>This article assumes you are using the Ubuntu Linux platform such as I discuss in <a href='http://seanvbaker.com/starting-linux-for-windows-developers/' >Starting Linux for Windows developers</a>.</p>
</blockquote>

<h3 id="iinstallnodejs">I. Install Node.js</h3>

<p><strong>ssh</strong> into your server as a user that has root sudo privileges. So using our example from before:</p>

<pre><code>ssh myadmin@162.243.238.83 -p 50231
</code></pre>

<p>Now run these commands one at a time to install the latest version of Node.js:</p>

<pre><code>sudo apt-get update
sudo apt-get install python-software-properties python g++ make
sudo apt-get install software-properties-common
sudo add-apt-repository ppa:chris-lea/node.js
sudo apt-get update
sudo apt-get install nodejs
</code></pre>

<p>Notice that the first command is <code>apt-get update</code>. It is always a good idea to make sure your Ubuntu package manager has the latest package settings before starting any install.</p>

<p>To verify Node has been installed:</p>

<pre><code>which node
node -v
</code></pre>

<p>This will tell you where Node is running from, and which version you are running.</p>

<h3 id="iisetupanodejsstaticfileserver">II. Set up a Node.js static file server</h3>

<p>In preparation for setting up a Git deployment process later, let's create a new Linux user called <code>git</code> who will own the web site files:</p>

<pre><code>sudo adduser git
</code></pre>

<p>Follow the prompts and set the user's name to something like "Git user" - you can leave the rest of the settings blank. Make sure you make a note somewhere of the password you use for this new user!</p>

<p>Now change to this new <em>git</em> user using: <code>su git</code>. Use the password you just created above.</p>

<p>Create a new directory where the node server app and web site files will be located:</p>

<pre><code>cd /home/git
mkdir mysite
cd mysite
mkdir public
</code></pre>

<blockquote>
  <p>Note that it is fairly standard to use <strong><em>public</em></strong> as the web root directory in Node websites - much as you're used to seeing <strong><em>wwwroot</em></strong> in IIS.</p>
</blockquote>

<p>We're going to use <strong><em>connect</em></strong> to create our web server. <a href='https://github.com/senchalabs/connect' >Connect</a> is a Node.js module that acts as a router for incoming requests to the server and offers a number of very popular "middleware" modules to handle many standard functions.</p>

<blockquote>
  <p>The most popular Node.js server framework module is probably <a href='https://github.com/visionmedia/express' >Express</a>, which is built on <em>connect</em>. I opt to use <em>connect</em> without Express because I don't leverage the templating and server state management functionality of <em>Express</em>. If you use Express, the syntax differs only slightly from the stand-alone <em>connect</em> you will see here.</p>
</blockquote>

<p>To install <em>connect</em> for this web site, make sure you are in the <code>/home/git/mysite</code> directory and use the Node.js NPM package manager to install <em>connect</em> as follows:</p>

<pre><code>npm install connect
</code></pre>

<p>This should install <em>connect</em> and finish by showing you a dependency tree something like this:</p>

<pre><code>connect@2.12.0 node_modules/connect
├── uid2@0.0.3
├── methods@0.1.0
├── cookie-signature@1.0.1
├── pause@0.0.1
├── fresh@0.2.0
├── qs@0.6.6
├── debug@0.7.4
├── bytes@0.2.1
├── buffer-crc32@0.2.1
├── raw-body@1.1.2
├── batch@0.5.0
├── cookie@0.1.0
├── negotiator@0.3.0
├── send@0.1.4 (range-parser@0.0.4, mime@1.2.11)
└── multiparty@2.2.0 (stream-counter@0.2.0, readable-stream@1.1.9)
</code></pre>

<p>Now you have a <code>node_modules</code> directory under your <code>mysite</code> directory. This is the standard directory where other Node modules will go as you install them with NPM. You can also put your custom modules there.</p>

<p>Now create the actual node server app. Make sure you are still in <code>/home/git/mysite</code> and run the command <code>cat &gt; server.js</code> and paste the following code in:</p>

<pre><code>var http = require("http");
var connect = require('connect');

console.log('\n\n--- Node Version: ' + process.version + ' ---');

// Set up Connect routing
var app = connect()
    .use(connect.static(__dirname + '/public'))
    .use(function(req, res) {
        console.log('Could not find handler for: ' + req.url);
        res.end('Could not find handler for: ' + req.url);
    })
    .use(function(err, req, res, next) {
        console.log('Error trapped by Connect: ' + err.message + ' : ' + err.stack);
        res.end('Error trapped by Connect: ' + err.message);
    });

// Start node server listening on specified port -----
http.createServer(app).listen(80);

console.log('HTTP server listening on port 80');
</code></pre>

<p><em>Remember to use <code>ctrl-d</code> to finish the cat file creation process.</em></p>
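
<p>If you're curious how the chain of <code>.use()</code> handlers above decides which one runs, here is a minimal self-contained sketch of connect-style chaining using plain functions. The <code>chain</code> helper is hypothetical - it is not connect's API - but it mirrors the routing order in <em>server.js</em>: try the static handler first, fall through to the catch-all, and route thrown errors to the 4-argument error handler.</p>

```javascript
// Hypothetical "chain" helper illustrating connect-style middleware routing.
function chain(middlewares) {
  return function (req, res) {
    var i = 0;
    function next(err) {
      var mw = middlewares[i++];
      if (!mw) return;
      if (err) {
        // error handlers are the 4-argument functions
        if (mw.length === 4) return mw(err, req, res, next);
        return next(err);                 // skip normal middleware on error
      }
      if (mw.length === 4) return next(); // skip error handlers when no error
      try { mw(req, res, next); } catch (e) { next(e); }
    }
    next();
  };
}

var app = chain([
  function (req, res, next) {             // stands in for connect.static()
    if (req.url === '/') { res.body = 'index.html'; } else { next(); }
  },
  function (req, res) {                   // catch-all, like the one in server.js
    res.body = 'Could not find handler for: ' + req.url;
  },
  function (err, req, res, next) {        // error trap
    res.body = 'Error trapped: ' + err.message;
  }
]);

var res = {};
app({ url: '/missing' }, res);
console.log(res.body);                    // -> Could not find handler for: /missing
```

<p>The order of the <code>.use()</code> calls matters for exactly this reason: a request falls through to the catch-all only when no earlier handler claimed it.</p>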

<p>Now create the good old fashioned "Hello World" HTML file in the public folder: <code>cd public</code> and <code>cat &gt; index.html</code>, then paste in the html:</p>

<pre><code>&lt;html&gt;
    &lt;head&gt;
        &lt;title&gt;Mysite&lt;/title&gt;
    &lt;/head&gt;
    &lt;body&gt;
       Hello World!
    &lt;/body&gt;
&lt;/html&gt;
</code></pre>

<p>Now to test your site. At this point, you can only run the node app when you have root privileges, because it is using port 80, which is a privileged port. We will fix this later - but for now, change to root using: <code>su -</code>. (Or you can use your administrative account by putting <code>sudo</code> in front of the node command.)</p>

<p>Now <code>cd /home/git/mysite</code> and use this command to start the node server app:</p>

<pre><code>node server.js
</code></pre>

<p>If all went well, the server will be running and outputting any standard output (when you use <code>console.log()</code> in the app) to the terminal. Now use your web browser to see if it worked. Go to <code>http://your.ip.address</code> and you should see the "Hello World!" text. If you get any errors, you will see them in the server terminal window.</p>

<blockquote>
  <p>Notice that connect's <em>static</em> middleware serves index.html as the default file if no resource was specified. You may also see <code>Could not find handler for: /favicon.ico</code> logged to the screen - that's ok for now - your browser was just looking for a favicon file which you do not have yet.</p>
</blockquote>

<p>To stop the server, use <code>ctrl-c</code> in the terminal.</p>

<h3 id="iiirunyournodeappasaservice">III. Run your Node app as a service</h3>

<p>Ubuntu has a nice process for creating system services called <strong><a href='http://upstart.ubuntu.com/' >Upstart</a></strong>. You create a simple script to define your service which will allow you to:</p>

<ul>
<li>Start the Node app whenever the server restarts</li>
<li>Restart the app if it should fail for some reason (a crash from a bug, memory leak, etc.)</li>
<li>Conveniently start/stop the app via system commands</li>
</ul>

<p>To create the script you must be root user. Change to root and create the Upstart script as follows. The name of your .conf file will become the name of your new service:</p>

<pre><code>su -
cd /etc/init
cat &gt; mysite.conf
</code></pre>

<p>Now paste in the script:</p>

<pre><code>description "Mysite Node Service"
author      "Your info ifya want"
start on started mountall
stop on shutdown

respawn
respawn limit 99 5

script
    sudo node /home/git/mysite/server.js &gt;&gt; /var/log/mysite.log  2&gt;&amp;1
end script

post-start script

end script
</code></pre>

<p>This service script tells the server to execute your node app and log its stdout and stderr to the log file you specified. Once started, it will keep running. Note: if you have a major error in the app that causes a crash on startup, Upstart will restart it up to 99 times within a 5-second window before giving up.</p>

<p>(Note the use of <code>sudo</code> to execute the node server. This will help later when we grant the git user sudo permissions to start/stop the app.)</p>

<p>Now you can start your node website service using:</p>

<pre><code>start mysite
</code></pre>

<p>It should start and tell you the process id, something like:</p>

<pre><code>mysite start/running, process 7412
</code></pre>

<p>Now test the site with your browser - it should be working. At this point, if you reboot the server, the mysite service will start automatically! To stop or restart your service, use <code>stop mysite</code> and <code>restart mysite</code>. Nice!</p>

<h3 id="ivusingnginxtohostmultiplenodesites">IV. Using Nginx to host multiple node sites</h3>

<p>At this point you can put any html, css, JavaScript, and image files in your <em>public</em> directory and host them using your node server. But what if you want to run several node web sites on the same machine? Assuming you have domain names for each site and can point their DNS records to your server's IP address, you can use the following solution.</p>

<p>One common approach is to use <a href='http://nginx.com/products/' >Nginx</a> as a reverse proxy. In this scenario, Nginx receives all public requests for your websites and routes the requests to your Node apps based on the requested host and domain name, and then sends your Node app's responses back to the client.</p>

<p>Nginx is fast and easy to configure, making it a great solution for this purpose.</p>

<p><strong>Install Nginx-</strong> First make sure your Node app is not running to avoid any conflicts with port 80: <code>sudo stop mysite</code>. <em>Remember that you have to be root or in the sudo group to stop your service at this point.</em></p>

<p>Now install Nginx:</p>

<pre><code>sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
</code></pre>

<p>Test that Nginx is running by pointing your browser to your server: <code>http://your.ip.address</code>. You should see the Nginx splash screen.</p>

<p><strong>Update your Node app port-</strong> With this approach, each of your Node apps runs on a different port, and Nginx uses that port to reach them. Edit the last few lines of your <em>server.js</em> file to move the app to port 8000:</p>

<pre><code>// Start node server listening on specified port -----
http.createServer(app).listen(8000);

console.log('HTTP server listening on port 8000');
</code></pre>
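
<p>After restarting the app, you can confirm something answers on the new port with <code>curl</code> before touching Nginx. A self-contained sketch (the inline <code>node -e</code> one-liner stands in for your app here so you can try it anywhere; with your real app running, just run the <code>curl</code> line):</p>

```shell
# Sanity check: is anything listening on port 8000?
# The node -e server below is a stand-in for your app.
node -e "require('http').createServer(function (req, res) { res.end('ok'); }).listen(8000)" &
pid=$!
sleep 1
curl -s http://localhost:8000/   # prints: ok
kill $pid
```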

<p><strong>Configure Nginx-</strong> For each of the Node sites you want to host, create an Nginx .conf file. You can use any name for the .conf file, but matching your site name will help avoid confusion. Use root user for this:</p>

<pre><code>su -
cd /etc/nginx/conf.d
cat &gt; mysite.conf
</code></pre>

<p>Now paste in this file content, modified for your setup:</p>

<pre><code>server {
    listen 80;

    server_name mysite.com www.mysite.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
</code></pre>

<p>The important settings are the <code>server_name</code> and <code>proxy_pass</code> lines.</p>

<p>This file tells Nginx to listen for any requests on port 80 for <em>mysite.com</em> or <em>www.mysite.com</em> and route them to <code>http://localhost:8000</code> which is the port our Node app is listening on. (You will assign different unique port numbers for each subsequent Node app you want to host.)</p>

<p>Note: You can also combine all your sites into one .conf file - just put one <em>server {}</em> block after another. Also, you can use <em>*.yourdomain.com</em> to grab <em>any</em> host names and route them. I did have one issue where I needed to specifically add <code>mydomain.com *.mydomain.com</code> for it to route requests to mydomain.com with no host prefix.</p>
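
<p>For example, a second site would get its own <em>server {}</em> block pointing at its own port. Everything here is a placeholder - <em>mysite2</em> and port 8001 are example names to match to your own app and DNS:</p>

```nginx
# Hypothetical second site: mysite2.com proxied to a Node app on port 8001.
server {
    listen 80;

    server_name mysite2.com www.mysite2.com;

    location / {
        proxy_pass http://localhost:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

<p>Save it as <em>/etc/nginx/conf.d/mysite2.conf</em> (or append the block to your existing .conf file) and restart Nginx.</p>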

<p>To test the Nginx reverse proxy process:</p>

<ul>
<li>Restart Nginx to enable the configuration: <code>service nginx restart</code> (or use <code>nginx -s reload</code>)</li>
<li><code>node /home/git/mysite/server.js</code> to start the Node app</li>
<li>Point your web browser to yoursite.com and verify you see your site now</li>
</ul>

<p>If you map more than a few domain names, the proxy may fail and you will get something like the following error in the Nginx error log:</p>

<pre><code>could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
</code></pre>

<p>To increase the <code>server_names_hash_bucket_size</code>, edit the <em>nginx.conf</em> file using vi:</p>

<pre><code>vi /etc/nginx/nginx.conf
</code></pre>

<p>The following line may be commented out with a <code>#</code> at the front of the line. Delete the <code>#</code> to re-activate the line:</p>

<pre><code>server_names_hash_bucket_size 64;
</code></pre>

<p>Restart Nginx. If 64 does not fix the problem at first, increase the value by powers of 2 until it works (128, 256, etc.)</p>

<p><strong>Troubleshooting:</strong> To better see what is going on, you can monitor the Nginx error or access logs while you try to reach the server:</p>

<pre><code>tail /var/log/nginx/access.log -f

tail /var/log/nginx/error.log -f
</code></pre>

<p><code>tail</code> will show you the last few lines of the log, and the <code>-f</code> will let you monitor any new log entries as they come in. Use <code>ctrl-c</code> to exit.</p>

<p>In the next article we'll set up an efficient Git deployment process so you can easily develop on your laptop and push full site deployments to your hosted server with one command.</p>

<hr />

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Next post I talk about <a href='http://seanvbaker.com/using-git-to-deploy-node-js-sites-on-ubuntu/' >Using Git for deployment</a>.</p>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/setting-up-a-node-website/</link><guid isPermaLink="false">9693b7ad-2aa4-4bd8-902e-b212172b2d84</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Sun, 15 Dec 2013 20:23:06 GMT</pubDate></item><item><title><![CDATA[Starting Linux for Windows developers]]></title><description><![CDATA[<p>I kept one foot in the Windows world as long as I could. It's impressive how many of the new technologies such as Node.js can run on Windows. Perhaps you have other Windows legacy systems, or you're limited to an existing supported Windows environment. I used to have both of these considerations.</p>

<p>As you start to work more with Node.js and many other emerging web technologies, you will begin to realize how much better and easier they work together when implemented in a Linux environment. Also, a majority of the helpful documentation and tutorials you find will assume you're doing things "the Linux way."</p>

<p>You don't need to be a Linux administrator to securely and reliably develop and host your own Linux solutions. Let's go through the process of provisioning a new Linux virtual machine (VM), accessing the server, setting up basic security and users, and getting up to speed copying files to and from the server.</p>

<blockquote>
  <p><strong>Development-</strong> For this example, I assume you develop on a Mac. You can work from Windows using a tool like <a href='http://www.chiark.greenend.org.uk/~sgtatham/putty/' >PuTTY</a>, but moving to Apple is nice because 1) the Unix-based OS helps your Linux skills, 2) most Linux tools and libraries have Mac versions, 3) the Mac terminal is nicer to use, and 4) <a href='http://www.sublimetext.com/3' >Sublime Text 3</a> supports retina displays ;)</p>
  
  <p><em>(Plus, it's the perfect excuse to get a shiny new MacBook Pro!)</em></p>
</blockquote>

<h3 id="creatingalinuxvirtualserver">Creating a Linux Virtual Server</h3>

<p>Sign up for an account at <a href='https://www.digitalocean.com/' >DigitalOcean</a> and create a new Linux "droplet." I'm using <strong>Ubuntu 13.10 x64</strong> for this example. <em>(If you plan to network multiple VM's together via a private connection, use NYC2 for a location and enable <a href='https://www.digitalocean.com/community/articles/how-to-set-up-and-use-digitalocean-private-networking' >Private Networking</a>.)</em></p>

<blockquote>
  <p>Why DigitalOcean? I agree with <a href='http://www.jeedo.net/a-year-later-with-digitalocean/' >Jeedo's sentiments</a>, as I also express <a href='http://seanvbaker.com/a-node-js-web-app-platform-recipe/' >here</a>.</p>
</blockquote>

<ul>
<li><strong>Note:</strong> Once your droplet is created, DigitalOcean will email you your new server connection info.</li>
</ul>

<h3 id="accessingandupdatingtheserver">Accessing and updating the server</h3>

<p>Setting up your server to authenticate using SSH keys is considered more secure, but I find it adds confusion when you're trying to learn and creating/destroying a bunch of VM's. So just <code>ssh</code> into the server as a user. You will, however, want to lock down the server to improve security, manageability, and reliability.</p>

<blockquote>
  <p>A good reason to do these lock-down steps when you first get started is that they slightly change the process of accessing your server and configuring your deployment process.</p>
</blockquote>

<p><strong>Terminal and basic commands-</strong> The Mac terminal has a lot of pretty themes. Find <em>Terminal</em> in your Applications/Utilities folder and drag it to your Dock for easy access. Some quick tips:</p>

<ul>
<li>Open new "tabs" (Shell -> New Tab) to run multiple sessions at the same time. Standardize on using certain themes for certain servers or functions... for example, I always use <em>Homebrew</em> theme for sessions on my local Mac,  <em>Novel</em> for remote VM sessions, and <em>Grass</em> for MySQL sessions. This can go a long way to help keep you sane.</li>
<li>The <code>up arrow</code> will bring back your last command. Press again and again to get prior commands. Nice.</li>
<li>When referencing a file name in a command, type the first few letters and then <code>tab</code> to autofill the rest. If there are multiple matches (it fills as much as it can), type another letter or two and <code>tab</code> again to fill the rest.</li>
<li>Your "home" directory is represented by <code>~</code>, so <code>cd ~/</code> will take you to your home folder. So it is quite helpful to keep your main project directories right off of your home folder.</li>
</ul>

<p>The bread and butter:</p>

<ul>
<li><strong><code>cd</code></strong> will change directory, as in <code>cd ~/myproject</code>. For Windows users it takes a little time to get used to forward slashes instead of backslashes. Up one level is <code>cd ../</code></li>
<li><strong><code>pwd</code></strong> will <strong>P</strong>rint your current <strong>W</strong>orking <strong>D</strong>irectory - very handy.</li>
<li><strong><code>mkdir newdirectoryname</code></strong> will create a new directory at your current location</li>
<li><strong><code>ls</code></strong> will list the directory files. <code>ls -a</code> will show <em>all</em> including hidden files that start with a <code>.</code>. <code>ls -l</code> shows files in a list - along with permissions and file size.</li>
<li>Directories can have one or more <code>.</code> in them. If a file or directory starts with a <code>.</code> it will be hidden. Use <code>ls -a</code> to see them.</li>
<li><strong><code>cat filename</code></strong> will write out the contents of <em>filename</em>.</li>
<li><strong><code>cat &gt; filename</code></strong> will wait for you to type or paste the text you want to write into <em>filename</em>. Use <code>cat &gt;&gt; filename</code> to append the text; the single <code>&gt;</code> will overwrite the file with your new content. Use <code>ctrl-d</code> when you are done typing or pasting to finish. Combined with copy and paste, this becomes a very fast way to move small file contents around.</li>
<li><strong><code>rm filename</code></strong> deletes a file</li>
<li><strong><code>rmdir directoryname</code></strong> deletes an empty directory</li>
<li><strong><code>rm -rf directoryname</code></strong> deletes a directory and all files and subdirectories in it</li>
<li><strong><code>tail</code></strong> <em>filename</em> will write out the last few lines of that file - great for log files you want to peek at. Use <code>-n 500</code> to see the last 500 lines. Use <code>-f</code> and it will keep writing out anything appended to the file in real time. Example: <code>tail /var/log/node.log -n 500 -f</code> Use <code>ctrl-c</code> to exit.</li>
<li><strong><code>cp filename anotherfilename</code></strong> copies <em>filename</em> to a new <em>anotherfilename</em></li>
<li><strong><code>mv filename newfilename</code></strong> will rename/move the file from <em>filename</em> to <em>newfilename</em></li>
</ul>
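
<p>The commands above can be tied together in a short practice session. A sketch that is safe to run anywhere - it works entirely inside a throwaway temp directory (all file names are examples):</p>

```shell
# Practice session exercising the basic commands in a throwaway directory.
set -e
cd "$(mktemp -d)"
mkdir myproject
cd myproject
printf 'line 1\nline 2\nline 3\n' > notes.txt   # like: cat > notes.txt, then ctrl-d
cat notes.txt                                   # prints all three lines
cp notes.txt backup.txt
mv backup.txt notes-backup.txt
ls -a                                           # shows both files (plus . and ..)
tail -n 1 notes.txt                             # prints: line 3
rm notes-backup.txt                             # clean up the copy
```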

<p><strong>ssh</strong> lets you connect to your virtual server and run a session there. Use the info DigitalOcean emailed you to connect, such as:</p>

<pre><code>ssh root@your.ip.address.here
</code></pre>

<p>Then type in the password they sent you when prompted. If you have a domain name pointed to that IP address, you can just <code>ssh root@yourdomain.com</code> as well.</p>

<p>You may be asked if you want to continue to connect, just enter <code>yes</code> as follows, and then enter your password from the email:</p>

<pre><code>MacBook-Pro:~ sean$ ssh root@162.243.238.83

The authenticity of host '162.243.238.83 (162.243.238.83)' can't be established.
RSA key fingerprint is c2:5d:e5:d0:f5:43:d8:52:73:22:fd:b6:7e:e3:ca:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '162.243.238.83' (RSA) to the list of known hosts.
root@162.243.238.83's password: 

root@nodeden:~#
</code></pre>

<p>The <code>root@nodeden:~#</code> prompt above shows us we're logged into the nodeden machine as root. (nodeden is what I set up as the host name when I created the droplet at DigitalOcean.) The <code>~</code> means we're in root's home directory. A quick <code>pwd</code> shows us we are indeed in <code>/root</code>.</p>

<p><strong>Package Managers-</strong> One of the first important Linux lessons I learned when moving from Windows is how critically important package management systems are. I was initially impressed with Node's <strong>NPM</strong> package manager, but when I started to use Ubuntu's package management and Mac's <strong><a href='http://brew.sh/' >Homebrew</a></strong>, I realized that I could not work in this new world without leveraging package managers. (A full week's battle trying to manually install and build GraphicsMagick on my Mac and Linux server was replaced with one command!)</p>

<p><strong>Updating Ubuntu-</strong> Once you have ssh'd into your new server as root, run these commands one at a time to upgrade Ubuntu's packages and OS using the package manager:</p>

<pre><code>apt-get update
apt-get upgrade
apt-get dist-upgrade
</code></pre>

<p><code>update</code> refreshes Ubuntu's package definitions, <code>upgrade</code> upgrades the installed packages, and <code>dist-upgrade</code> applies the larger upgrades (such as kernel updates) that may require a server restart.</p>

<h3 id="basiclinuxusermanagement">Basic Linux user management</h3>

<p><code>root</code> is the "super user" account. You will need to be <code>root</code> to do certain things, but it's important to set up a new account that will generally administer the server. This will also allow you to disable <code>root</code> from being able to ssh into the server - a great security improvement.</p>

<p><strong>Change the <em>root</em> user password-</strong> Even once you disable root and create a new administrative user, you will need the root password to get root access. Log in as root and run the <code>passwd</code> command to reset the root password to something you can remember and type.</p>

<p><strong>Create a new user to administer the server-</strong> You can call this user anything you want:</p>

<pre><code>adduser myadmin
</code></pre>

<p>Enter in a password for this user, and the user's name. You can enter blanks for the other questions. Now add the new user to the <code>sudo</code> group:</p>

<pre><code>usermod -aG sudo myadmin
</code></pre>

<p><strong>sudo-</strong> Members of the <code>sudo</code> group can run most root-level commands. Once your administrative user is in the sudo group, you can log in as that user and prefix a command with <code>sudo</code> to run it with root-level access. You can also "become" another user with <code>su</code> ("substitute user") if you know that user's password - <code>su -</code> with no user name substitutes root. So when logged in as root, run <code>su myadmin</code> to become myadmin. <em>Note:</em> <code>exit</code> will return you to the user you were when you ran <code>su</code>.</p>

<blockquote>
  <p>To test your new user, logout of the server using <code>exit</code> and then ssh as the new user: <code>ssh myadmin@162.243.238.83</code>.
  <strong>Important-</strong> make sure you can <code>su -</code> to get to root from this user, since we are about to disable the root ssh login. Also remember to use the new root user password you set up above!</p>
</blockquote>

<h3 id="disablerootsshlogin">Disable root ssh login</h3>

<p>ssh into your server as root. To disable the ability for anyone to ssh as root, edit the <strong>sshd_config</strong> file using the <strong>vi</strong> editor:</p>

<pre><code>vi /etc/ssh/sshd_config
</code></pre>

<p>Here is a nice overview on using vi to edit files: <a href='http://www.cs.colostate.edu/helpdocs/vi.html' >Basic vi Commands</a></p>

<p>Basically, use the down arrow to get to the line:</p>

<pre><code>PermitRootLogin yes
</code></pre>

<p>Arrow to the right to put the cursor on the "y". Now press the <code>i</code> key to enter insert mode. Then you can type "no" and use the right/left arrow keys and the delete key to delete the "yes". The line should now look like this:</p>

<pre><code>PermitRootLogin no
</code></pre>

<p>Now use the <code>esc</code> key to get out of insert mode. Type the <code>:</code> key to get to the command line at the bottom of the vi editor and type <code>wq!</code> and then <code>return</code> which means, "<strong>w</strong>rite the file, <strong>q</strong>uit, and <strong>!</strong> don't complain on exit." (Yes, vi takes a bit to get used to.)</p>

<p>Now before we can test the change, we need to restart the ssh service:</p>

<pre><code>restart ssh
</code></pre>

<p>Now log off using <code>exit</code> and try to ssh to your server as root. It should fail. To log in, just ssh as the administrator user you created, then you can <code>su -</code> to become root whenever you need.</p>

<blockquote>
  <p>Note: There are some commands and tasks you cannot accomplish using only <code>sudo</code>; sometimes you must <code>su -</code> to become root. I find it best to try everything using <code>sudo</code> first, then fall back to <code>su -</code> as root when required. This helps you learn which tasks truly require root.</p>
</blockquote>

<h3 id="changethesshport">Change the ssh port</h3>

<p>ssh runs on port 22 by default. There are many debates about whether moving it to a different port gains you anything beyond security through obscurity. I chose to change it to cut down the endless automated attacks you will otherwise see in your auth.log. With ssh moved to a different port, it's much easier to review auth.log for genuine attack attempts. It's not essential, but if you want to change your ssh port:</p>

<p>Change to root <code>su -</code> and edit the sshd_config file:</p>

<pre><code>vi /etc/ssh/sshd_config
</code></pre>

<p>Just change the line right near the top where it sets the port number:</p>

<pre><code># What ports, IPs and protocols we listen for
Port 22
</code></pre>

<p>Use any open port above 1024 and no higher than 65535 (the largest valid port number). Save your update and restart ssh using <code>/etc/init.d/ssh restart</code>.</p>
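<p>After the edit, that section of <em>sshd_config</em> should look like this (using 50231 as the example port):</p>

<pre><code># What ports, IPs and protocols we listen for
Port 50231
</code></pre>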

<p>Now <code>exit</code> and try to ssh again. It should fail. Now here is how you ssh using a different port, such as 50231 in this case:</p>

<pre><code>ssh myadmin@162.243.238.83 -p 50231
</code></pre>

<h3 id="keyubuntudirectoriesandlogs">Key Ubuntu directories and logs</h3>

<p>It helps to understand where things are and what patterns other developers follow. Different technology platforms have their own somewhat standard/expected approaches. Newer technologies can mix and match these standards. For basic use, I have found the following few items to be helpful:</p>

<ul>
<li>Note that each system user gets a home directory. When you ssh, you start out in the user's home directory - <code>~</code> points to this home directory when you are logged in as a given user. So when I ssh as myadmin, I start off in <code>/home/myadmin</code>, which is what <code>~</code> expands to.</li>
<li>The root user home directory is <code>/root</code></li>
<li>If you plan to use a Git deployment process, create a user called <code>git</code> who will have permissions to push app updates to the server and deploy them to production. Leverage the git user's home directory and keep the git repository directory there to make the git url simple.</li>
<li>Key logs are in <code>/var/log/</code>
<ul><li><code>tail auth.log -n 500 -f</code> Review auth.log - security access</li>
<li>Nginx will have a directory for logs here</li>
<li>You can log your node apps here</li>
<li><code>/var</code> stands for "variable" because these files can grow a lot and need to be managed/archived.</li></ul></li>
</ul>
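<p>You can see the home directory shorthand in action for yourself:</p>

<pre><code>echo ~
cd ~
pwd
</code></pre>

<p>Both <code>echo ~</code> and the final <code>pwd</code> print the same path - <code>/home/myadmin</code> when you are logged in as myadmin.</p>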

<h3 id="copyfilestofromyourserver">Copy files to/from your server</h3>

<p>Use <code>scp</code> to <strong>S</strong>ecure <strong>C</strong>o<strong>P</strong>y files. <code>scp</code> allows you to specify your ssh logon credentials along with the server path. For example, let's copy a database backup file from the DigitalOcean VM to my local Mac development machine:</p>

<pre><code>scp myadmin@162.243.238.83:/home/myadmin/backup.sql /Users/sean/myproject/backup.sql
</code></pre>

<p>If you have moved your ssh port, you need to jump through the extra hoop of applying the <code>-P</code> argument to use the correct port:</p>

<pre><code>scp -P 50231 myadmin@162.243.238.83:/home/myadmin/backup.sql /Users/sean/myproject/backup.sql
</code></pre>

<p>You can also copy files from your laptop to the server:</p>

<pre><code>scp -P 50231 /Users/sean/myproject/backup.sql myadmin@162.243.238.83:/home/myadmin/backup.sql
</code></pre>

<hr />

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Next post I will talk about <a href='http://seanvbaker.com/setting-up-a-node-website/' >Setting up a Node.js website</a>.</p>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/starting-linux-for-windows-developers/</link><guid isPermaLink="false">28203227-7bd7-4d3a-833e-c7510ab40996</guid><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Wed, 11 Dec 2013 20:11:41 GMT</pubDate></item><item><title><![CDATA[A Node.js web app platform recipe]]></title><description><![CDATA[<p>I have good memories from times I was starting with a new technology. From the first "Hello World" to the first "Oh, I get it now" sensation, learning a new platform can really bring back that inspired feeling of empowerment that first sucked you into the esoteric world of software development.</p>

<p>But committing to a new technology has its risks. To really learn a new approach takes time. To gain the benefit that comes from experience, you know you will run into road blocks, and inevitably, you will make mistakes. But in the end, most of us know that in the world of technology, not looking forward carries the most risk of all.</p>

<blockquote>
  <p>We all wonder- where are things going? But more importantly, we should wonder- what is the direction things <strong><em>ought</em></strong> to go?</p>
</blockquote>

<h3 id="whynodejs">Why Node.js?</h3>

<p>I first heard about Node.js as many probably have... from secondhand tall tales about some new amazingly fast server platform that uses JavaScript of all things. Tantalizing. And folks who use it seem somehow contented, in a pleasantly bemused sort of way.</p>

<p><strong>I started with Node.js because I was looking for:</strong></p>

<ul>
<li>A new non-IDE server-side solution</li>
<li>A server with no state management overhead built in</li>
<li>A like-minded developer community</li>
<li>A new technology that was gaining acceptance</li>
</ul>

<p><strong>I adopted Node.js because I found:</strong></p>

<ul>
<li>An amazingly powerful server programming environment</li>
<li>Exemplary package management implementation</li>
<li>A huge library of very helpful well-written packages</li>
<li>A talented helpful community</li>
<li>A sparse default configuration - add modules as needed</li>
<li>Very low-latency (fast!) response times</li>
<li>The magic of the Asynchronous paradigm</li>
</ul>

<h3 id="linuxvswindows">Linux vs. Windows</h3>

<p>I spent the first 6 months using Node.js in a Windows environment. It works, but you are constantly swimming upstream. Don't fight it. Go Linux. The whole idea of Node.js is a Linux approach. Node is a great excuse to start out with Linux if you've been a good Windows soldier up to now.</p>

<p>When it comes to flavors of Linux, like many others, I quickly realized that <strong>Ubuntu</strong> is ideal for application hosting. It is also the most widely supported and documented OS for this purpose. (I find it less IT and network administrator focused and more developer/app hosting focused than others.)</p>

<h3 id="nodejshosting">Node.js Hosting</h3>

<p>I skipped more app-centric services such as <a href='https://www.heroku.com/' >Heroku</a>. The appliance app hosting approach (such as is also offered by Azure) may be a great approach for scalable mobile app back-end servers. But for more robust solutions, you will want to take advantage of the inherent power Node.js offers through the Linux OS and other software you install on the server. (I also find value in learning on a dedicated VM first, even if you plan to leverage Node specific appliance hosting in the future.)</p>

<p><strong>Azure-</strong> I started with Microsoft Azure. At first, this seemed surprisingly nice. The portal seemed simple and elegant, and they seemed to be supporting Node.js well via <a href='https://github.com/tjanczuk/iisnode/wiki' >iisnode</a>. Azure also allows you to set up Linux "Virtual Machines" and install Node yourself. But after a year, I moved away from Azure because:</p>

<ul>
<li>Azure is becoming more complex and Network/IT focused</li>
<li>It is more expensive for open source hosting</li>
<li>Azure's subscription/payment process is poor
<ul><li>It's easy to lose production servers for silly reasons</li>
<li>Their customer service is unreliable</li></ul></li>
</ul>

<p>Also make sure you understand how Azure throttles your cheaper hosted VMs and cloud databases. I did find the performance of their dedicated VMs to be very consistent though, with reliable network connectivity and uptime.</p>

<p><strong>Joyent-</strong> I used <a href='http://www.joyent.com/' >Joyent</a> for a while:</p>

<ul>
<li>Amazing technology - I plan to use for performance-critical solutions</li>
<li>Great team - very supportive and intelligent - wow</li>
<li>Note that their <a href='http://smartos.org/' >SmartOS</a> is custom Linux:
<ul><li>Some subtle changes to what you're used to</li>
<li>Not easy to re-host elsewhere</li>
<li>Worth it if you need speed / advanced performance monitoring</li>
<li>Plays VERY well with Node.js</li>
<li>A tad more expensive, but great value</li>
<li>Billing process is not as mature as it should be</li></ul></li>
</ul>

<p><strong>DigitalOcean-</strong> I currently use <a href='https://www.digitalocean.com/?refcode=fb638d4235b4' >DigitalOcean</a> and have been very impressed:</p>

<ul>
<li>DigitalOcean is so simple and elegant - I love it</li>
<li>Very cost effective for open source hosting</li>
<li>All SSD (Solid-State Drive)</li>
<li>Great auto VM backup process</li>
<li>Ability to <a href='https://www.digitalocean.com/community/articles/how-to-set-up-and-use-digitalocean-private-networking' >network your virtual servers</a></li>
</ul>

<h3 id="databaseanddbaccess">Database and DB Access</h3>

<p>Reliable relational database interfacing is critical for my endeavors. I have also noticed that relational database use tends to be underappreciated by newer developers. SQL is more than just storing and retrieving data - it can also be an incredibly powerful element of your process design and functionality.</p>

<p><strong>Microsoft SQL-</strong> is cost effective on Azure. In this context, however, you lose some of the benefits of the enterprise product that I have become accustomed to.</p>

<p><strong>MySQL-</strong> is cost effective everywhere of course. It was also a fairly easy process to come up to speed with MySQL from MSSQL, and the performance has been great. Enterprise tasks such as backing up and performance monitoring can be a challenge though.</p>

<p><strong>node-sqlserver-</strong> <a href='https://github.com/WindowsAzure/node-sqlserver' >node-sqlserver</a> is a Node.js driver for Microsoft SQL server. I spent a fair amount of time trying to get this driver working. They were (and still are...) in the early stages of development, so building the driver is a challenge. I was able to succeed only in an Azure Windows Server VM. It seems like very few (if any) folks have been able to get it working in the Azure "Web Site" appliance environment. And there are no plans yet to create a Linux version. (The Windows Server version did work quite well for me though.)</p>

<p><strong>node-mysql-</strong> <a href='https://github.com/felixge/node-mysql' >node-mysql</a> created by Felix Geisendörfer is a great Node.js module. It is a "pure node.js JavaScript Client implementing the MySql protocol." That it is, and it works wonderfully too. The <code>escape</code> feature is very handy for handling SQL injection issues and automatic date conversion between JavaScript and MySQL (very helpful!). It also supports connection pooling and database transactions.</p>

<p><strong>NoSQL-</strong> I have not implemented a major NoSQL project yet - so I can't speak to that. (I know many Node.js platforms tightly integrate NoSQL database support.) </p>

<h3 id="versioncontrolanddeployment">Version Control and Deployment</h3>

<p>Coming from the world of <strong>Microsoft Visual SourceSafe</strong>, learning <strong>Git</strong> was rather intimidating at first. I ended up focusing on the deployment and code management benefits over the version control and team development use. This proved to be a great way to get started with Git - and I could not imagine life without it now.</p>

<h3 id="email">Email</h3>

<p>You will likely need to send emails out from your Node server. I spent longer than I care to recount setting up <a href='http://www.postfix.org/' >Postfix</a> SMTP on my VM. In the end, it was not too bad to install and use once I figured it out - but I learned that the IP address you get assigned by your hosting provider is often already blacklisted for email sending - so your emails will end up in the recipient's spam folder. You can clean the IP address, but it takes time and effort.</p>

<p>I took a cue from <a href='https://ghost.org/' >Ghost</a> and tried <a href='http://www.mailgun.com/' >Mailgun</a>, a cloud email service created for developers. <strong>Mailgun</strong> is an amazing service for developers. It not only worked great, but is also very cost effective (free up to 10,000 emails per month.) Their API looks great, they have a simple to use inbound email forwarding filter, great email DNS configuration/troubleshooting tools, and awesome web-based logging.</p>

<p>I used the <a href='https://github.com/eleith/emailjs' >emailjs</a> Node module to send to Mailgun from my node server with great success.</p>

<h3 id="runningmultiplenodejsappsonasingleserver">Running multiple Node.js apps on a single server</h3>

<p>At first I stuck with one Node app per server. But as soon as I found myself needing to host several smaller dedicated-domain Node web sites on one VM, I needed to find a solution. I was used to using IIS on a multi-homed Windows Server to serve multiple web sites from one machine.</p>

<p><strong>Reverse Proxy approach-</strong> <a href='http://nginx.com/' >Nginx</a> is often used in front of Node.js, not only to host multiple Node sites, but at times to also serve static files and/or implement SSL. I was surprised how easy it was to install and configure Nginx. I still use Node.js to implement SSL and static file serving for the most part, but Nginx has certainly become a key layer in my Node stack now.</p>
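<p>As a rough sketch of the idea (the domain and port here are placeholders), an Nginx server block that proxies one site's traffic to a Node app listening on port 8000 looks something like this:</p>

<pre><code>server {
    listen 80;
    server_name yourblogdomain.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
</code></pre>

<p>Add one such server block per site, each pointing at its own Node port.</p>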

<h3 id="serversideimageprocessing">Server-side image processing</h3>

<p>Getting <a href='http://www.graphicsmagick.org/' >GraphicsMagick</a> installed and working on the server has allowed me to implement fancy UI functions I never considered before. The Node.js GraphicsMagick wrapper <a href='https://github.com/aheckmann/gm' >gm</a> makes server-side image processing as easy as possible. The exposed GraphicsMagick commands themselves take some time to get up to speed with, but they are incredibly powerful.</p>

<h3 id="blogintegration">Blog integration</h3>

<p>If you want to integrate a nice Node.js-based blog into your site, look into <a href='https://ghost.org/' >Ghost</a>. Ghost is a very new open source blog platform built with Node.js. (You're reading a Ghost blog now.) It uses <a href='http://handlebarsjs.com/' >Handlebars</a> as a template engine and supports custom theme creation. It's quite simple to use their theme approach to embed the whole blog into your existing site design. You can read more about how I set it up here: <a href='http://seanvbaker.com/a-ghost-workflow/' >A Ghost Workflow</a>.</p>

<hr />

<blockquote>
  <p>For a full introduction and index to this blog: <a href='http://seanvbaker.com/node-js-one-new-approach/' >Node.js: One New Approach</a></p>
</blockquote>

<p>Next post I talk about <a href='http://seanvbaker.com/starting-linux-for-windows-developers/' >Starting with Linux for Windows developers</a>.</p>

<p>Cheers!</p>]]></description><link>http://seanvbaker.com/a-node-js-web-app-platform-recipe/</link><guid isPermaLink="false">ab0829a6-89c6-49b6-95da-794951657d82</guid><category><![CDATA[Node.js stack]]></category><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Sat, 07 Dec 2013 03:36:37 GMT</pubDate></item><item><title><![CDATA[Node.js: One New Approach]]></title><description><![CDATA[<blockquote>
  <p>From out of the torrent of emerging web technologies, it was <a href='http://nodejs.org/' >Node.js</a> that finally inspired me to drop my otherwise adequate legacy platform in favor of <em>"a new approach"</em>.</p>
</blockquote>

<p><strong>Before-</strong> When 14.4 modems still roamed the Earth, I created e-commerce sites on my Mac using Perl and uploaded them to Unix servers. User state was managed in the HTTP POST data, and persistent data was managed on the server by reading and writing static files.</p>

<p><strong>And then-</strong> I met Microsoft SQL and Active Server Pages (known as ASP classic now). It blew my mind. We called it <em>"Dynamic SQL"</em> and <em>"Dynamic HTML"</em> back then. We used JavaScript in hidden frames to accomplish what Ajax does today. Google's original Gmail web app was <a href='http://koranteng.blogspot.com/2004/10/on-gmail-and-dhtml-architecture-again.html' >considered a "Dynamic HTML" solution</a> before "DOM manipulation" became the new phraseology.</p>

<p><strong>Now-</strong> The next step. With HTML5, user state moves to the client. The server becomes RESTful, servicing autonomous requests. A more natural service-oriented architecture is emerging - not unwieldy XML Web Services, but lightweight JSON RESTful services.</p>

<h3 id="thisblog">This blog</h3>

<p>There are many good introductions to Node.js, and even more awesome examples of how modules can be implemented. But there are too few examples showing complete implementations of Node.js being used as a new server platform to support the HTML5 application paradigm.</p>

<p>I created <em>The Node Den</em> to post my notes and share my experiences creating a Node.js e-commerce site that includes many of the fundamental features critical to most new website applications.</p>

<h3 id="theguineapig">The Guinea Pig</h3>

<p><a href='http://www.pamperyourpoultry.com/' >Pampered Poultry</a> is a real world e-commerce site I created a while back using Microsoft ASP Classic and MS SQL server. It ran on a Windows Server VM hosted on Azure. The site included a back-end operations/admin site to facilitate order processing and catalog management.</p>

<p>The goals of the new rebuilt site would be:</p>

<ul>
<li>Simplicity- Limit superfluous technology</li>
<li>Low latency- A quick responding site</li>
<li>Modern user experience- but good cross-browser support</li>
<li>Manageability- Efficient deployment and malleable code base</li>
<li>Low cost- This is a business site after all!</li>
</ul>

<p>To accomplish these goals, I settled on the following architecture:</p>

<ul>
<li>HTML5 approach with user state on the client</li>
<li>jQuery- it's like SQL for the DOM!</li>
<li>MySQL database- transactional, fast, and free...</li>
<li>Node.js RESTful services to handle data I/O</li>
<li>Node.js serves static content for simplicity</li>
<li>Ubuntu Linux- fast, cost effective, and plays well with others</li>
<li>Git code management and deployment- a good workflow is key</li>
</ul>

<p><img src='http://seanvbaker.com/content/images/2013/Dec/pyp_site.png'  alt="" /></p>

<h3 id="blogindexroadmap">Blog Index / Roadmap</h3>

<p>Here is what I plan to cover. I will link each topic as it's published.</p>

<blockquote>
  <p>For reference, the complete site code base is available on Github:</p>
  
  <p><a href='https://github.com/svbaker/pyp2-site' >https://github.com/svbaker/pyp2-site</a></p>
</blockquote>

<p><strong>I. Introduction and Orientation</strong></p>

<ul>
<li><a href='http://seanvbaker.com/a-node-js-web-app-platform-recipe/' >A Node.js web app platform recipe</a>
<ul><li>Lessons learned</li>
<li>Recommendations</li></ul></li>
<li><a href='http://seanvbaker.com/starting-linux-for-windows-developers/' >Starting Linux for windows developers</a>
<ul><li>What you need and nothing more</li>
<li>Program your OS!</li></ul></li>
<li><a href='http://seanvbaker.com/setting-up-a-node-website/' >Setting up a Node.js website</a>
<ul><li>Setting up Node and Nginx</li>
<li>Run your node app as a service</li>
<li>Use Nginx to host multiple Node sites</li></ul></li>
<li><a href='http://seanvbaker.com/using-git-to-deploy-node-js-sites-on-ubuntu/' >Using Git for deployment</a>
<ul><li>So much better than FTP hell</li></ul></li>
</ul>

<p><strong>II. Starting with Node.js</strong></p>

<ul>
<li>Understanding Node.js
<ul><li><a href='http://seanvbaker.com/understanding-node-js/' >Moving from IIS and Apache</a></li>
<li><a href='http://seanvbaker.com/why-async' >Why async?</a></li></ul></li>
<li><a href='http://seanvbaker.com/using-recursion-to-tame-callback-hell/' >Using recursion to tame callback hell</a></li>
</ul>

<p><strong>III. <a href='http://seanvbaker.com/the-html5-approach/' >The HTML5 Approach</a></strong></p>

<ul>
<li><a href='http://seanvbaker.com/moving-user-state-to-the-browser' >Moving user state to the browser</a>
<ul><li>Cookies, Local Storage, and JSON</li>
<li>Browser considerations</li></ul></li>
<li>App UI design in a content driven web
<ul><li>CSS, jQuery, and Ajax</li>
<li>Search Engine Optimization (SEO)</li>
<li>Favicon, Meta tags, Mobile support</li>
<li>Retina support</li></ul></li>
</ul>

<p><strong>IV. The Node.js HTML5 App Server</strong></p>

<ul>
<li>Setting up a Node.js server using Connect
<ul><li><a href='http://www.senchalabs.org/connect/' >Connect</a> and its key middleware</li>
<li>Logging, redirecting, and error trapping</li>
<li>Serving static files</li>
<li>Managing environment configuration data</li>
<li>Creating Node.js modules</li>
<li>A Pattern for Ajax service handlers</li></ul></li>
<li>Sending email from Node.js
<ul><li>Using <a href='http://www.mailgun.com/' >mailgun</a> and <a href='https://github.com/eleith/emailjs' >emailjs</a></li>
<li>Hosting your own SMTP</li></ul></li>
<li>Setting up SSL in Node.js
<ul><li>Creating/managing SSL Certificates and keys</li>
<li>Implementing the SSL server</li>
<li>Redirecting SSL and non-SSL traffic</li></ul></li>
</ul>

<p><strong>V. Node.js Ajax services with JSON and MySQL</strong></p>

<ul>
<li>The power of JSON and JavaScript
<ul><li>Using JSON with jQuery and Node.js</li>
<li>Patterns for productive database interfacing</li></ul></li>
<li>Data processing in Node.js with node-mysql
<ul><li>Database connection management</li>
<li>Security considerations</li>
<li>Serving requests for data</li>
<li>Reliable database updates: transactions</li>
<li>Error trapping</li></ul></li>
<li>Encrypting sensitive data using Node.js</li>
</ul>

<p><strong>VI. Implementing an operations/admin site</strong></p>

<ul>
<li>An HTML5 operations site architecture
<ul><li>Shell layout and DOM organization</li>
<li>Security management</li>
<li>Efficient data management patterns</li></ul></li>
<li>An Etsy-like image uploader using Node.js
<ul><li>Graphics-based UI</li>
<li>A Node.js multiple file upload process</li>
<li>Scale images on the server using <a href='http://www.graphicsmagick.org/' >GraphicsMagick</a></li>
<li>Save original and thumbnail images to disk</li>
<li>Save image upload info to the database</li></ul></li>
</ul>

<p><strong>VII. Final Touches and Tweaks</strong></p>

<ul>
<li>HTTP Compression and Caching in Node.js</li>
<li>Using Nginx to host multiple sites</li>
<li>Integrating a Ghost blog into a Node.js site</li>
<li>Cross browser testing</li>
<li>MySQL backup process</li>
</ul>

<p><img src='http://seanvbaker.com/content/images/2013/Dec/pyp_ops.png'  alt="" /></p>]]></description><link>http://seanvbaker.com/node-js-one-new-approach/</link><guid isPermaLink="false">426ca30c-d4d2-4f20-8032-85cb68edfa44</guid><category><![CDATA[Node.js]]></category><category><![CDATA[e-commerce]]></category><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Mon, 02 Dec 2013 19:29:36 GMT</pubDate></item><item><title><![CDATA[A Ghost Workflow]]></title><description><![CDATA[<blockquote>
  <p>Host multiple Ghost blog sites on the same server. Develop and deploy your theme updates with ease. Most expeditious. </p>
</blockquote>

<p>I wanted to start using <a href='https://ghost.org/' >Ghost</a> for my blogs. For those who don't know, Ghost is a spiffy new Open Source platform solely created to manage and publish blogs. I was drawn to Ghost because it is a Node.js application, and because it uses <a href='http://daringfireball.net/projects/markdown/' >Markdown</a> for publishing.</p>

<p>This guide shows one way to set up a Ghost blog dev environment to</p>

<ul>
<li>host multiple blog sites on a single server and</li>
<li>use Git to deploy your theme updates.</li>
</ul>

<p>I am using the following dev/hosting platforms:</p>

<ul>
<li>I develop on a Mac</li>
<li>I host production on an Ubuntu 13.04 x64 VM at <a href='https://www.digitalocean.com/?refcode=fb638d4235b4' >DigitalOcean</a></li>
</ul>

<h4 id="overviewoftheenvironment">Overview of the Environment:</h4>

<ul>
<li>Mac development has Node.js and Ghost app installations</li>
<li>Production Linux VM uses Nginx to route requests to the respective Ghost blog Node apps</li>
<li>Production node apps run as Upstart services to stay up</li>
<li>Each blog app has its own Ghost installation</li>
<li>Git tracks only custom theme files or code edits</li>
<li>Git deploy hooks in production allow one command deployments for each blog site</li>
</ul>

<hr />

<h3 id="installghostondevelopmentmac">Install Ghost on Development Mac</h3>

<p>Be sure you have <a href='http://nodejs.org/' >Node.js</a> installed first.</p>

<p>Sign up for Ghost: <a href='https://ghost.org/signup' >https://ghost.org/signup</a></p>

<p>Follow their instructions to download and extract the zip file.</p>

<p>If you plan to manage multiple blogs, it will help if you stay consistent with where you put your blog projects. I keep all mine right off the root of my user folder. So for this example, move all the extracted ghost files to <code>~/yourblog</code>.</p>

<p>Now install (build) the ghost app:</p>

<pre><code>cd ~/yourblog
npm install --production
</code></pre>

<p>Then rename <em>config.example.js</em> to <em>config.js</em> to avoid any confusion. (You can keep the config.example.js file for reference if you'd rather - Ghost will create the config.js file from it the first time you start Ghost.)</p>

<p>The <em>config.js</em> file is used to manage your specific environment settings. Ghost keeps separate settings for development and production environments here. When you start the Ghost app, you pass an argument to control which environment settings you want Ghost to use.</p>

<p>Edit the development section of the <em>config.js</em> file to assign an open port you want to use. Ghost defaults to <em>2368</em>. I find it easiest to use the same port for development and production. So here I change the port to <em>8000</em> for this example:</p>

<pre><code>development: {
    // The url to use when providing links to the site, E.g. in RSS and email.
    url: 'http://localhost:8000',

    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghost-dev.db')
        },
        debug: false
    },
    server: {
        // Host to be passed to node's `net.Server#listen()`
        host: '127.0.0.1',
        // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
        port: '8000'
    }
</code></pre>

<p>While you're at it, let's change the production section as well. Also, set the <code>url</code> to match the domain of your blog:</p>

<pre><code>production: {
    url: 'http://yourblogdomain.com',
    mail: {},
    database: {
        client: 'sqlite3',
        connection: {
            filename: path.join(__dirname, '/content/data/ghost.db')
        },
        debug: false
    },
    server: {
        // Host to be passed to node's `net.Server#listen()`
        host: 'localhost',
        // Port to be passed to node's `net.Server#listen()`, for iisnode set this to `process.env.PORT`
        port: '8000'
    }
},
</code></pre>

<p>Now test the install (<code>npm start</code> uses <code>--development</code> by default):</p>

<pre><code>cd ~/yourblog
npm start
</code></pre>

<p>Open a web browser to <code>http://localhost:8000</code> to verify Ghost is running.</p>

<blockquote>
  <p>To learn about setting up and switching Ghost themes, visit the <a href='http://docs.ghost.org/themes/' >Ghost documentation</a>.</p>
</blockquote>

<p><strong>Troubleshooting</strong>: Watch the terminal's live logging to see what might be causing any potential issues.</p>

<h4 id="initiateyourcustomthemes">Initiate Your Custom Theme(s)</h4>

<p>If you're developing your own theme, copy the default Casper theme into your own theme folder:</p>

<pre><code>cd ~/yourblog/content/themes
cp -r casper yourtheme
</code></pre>

<h4 id="setupgitversioncontrol">Setup Git Version Control</h4>

<p>Make sure you have Git installed: <a href='http://git-scm.com/download/mac' >git-scm.com</a> or <a href='http://brew.sh/' >Homebrew</a>, etc.</p>

<p>Now go to your project <code>cd ~/yourblog</code> and run <code>git init</code> to create a local repository for the blog site.</p>

<p>You will have different Ghost files installed in production (due to the binaries and such created during installation), so you want to track only changes to your theme files, or to any Ghost core files you may need to edit. To accomplish this, use a <code>.gitignore</code> file to filter out all files except those you want to manage and deploy to production.</p>

<p>This can be tricky because .gitignore is optimized to ignore files, not include them. </p>

<p>In your <code>~/yourblog</code> directory, add a new file named <code>.gitignore</code> and use something like this:</p>

<pre><code># Ignore everything first:
/*

# Then un-ignore the following:
!.gitignore

# --- Ghost core server index.js file ---
# --- Only needed if you edit this file ---
core/*
!/core/
core/server/*
!/core/server/
/core/server/*
!/core/server/index.js

# ---- Theme files -----
content/*
!/content/

content/themes/*
!/content/themes/

content/themes/yourtheme/*
!/content/themes/yourtheme/

!/content/themes/yourtheme/*
</code></pre>

<p>The trick is to first specifically ignore each directory that you then re-include files from.</p>

<p>You can test your .gitignore file by running this in your <em>~/yourblog</em> directory:</p>

<pre><code>git add .
git commit -m "init commit"
git ls-tree --full-tree -r HEAD
</code></pre>

<p>This will add all the trackable files, and the last command shows you the files that were added and are now tracked by Git. You should only see the files you expect. If not, edit your <em>.gitignore</em> file until it works as required for your needs. You don't want the core Ghost files (unless you asked for some specifically). You also want to make sure it is picking up the files you plan on developing and deploying. To "un-track" a file in Git that was added by accident, use <code>git rm --cached &lt;file&gt;</code>.</p>
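<p>If you want to dry-run the whitelist pattern before committing anything, <code>git check-ignore</code> reports whether a given path is ignored (exit status 0) or kept (exit status 1). Here is a minimal sketch in a throwaway repository - the paths and theme name mirror the example <em>.gitignore</em> above and are placeholders:</p>

```shell
# Sketch: verify the whitelist-style .gitignore in a throwaway repo
tmp=$(mktemp -d)
cd "$tmp"
git init -q .

mkdir -p core/server content/themes/yourtheme

# Recreate the example .gitignore, one pattern per line
printf '%s\n' \
  '/*' '!.gitignore' \
  'core/*' '!/core/' \
  'core/server/*' '!/core/server/' \
  '/core/server/*' '!/core/server/index.js' \
  'content/*' '!/content/' \
  'content/themes/*' '!/content/themes/' \
  'content/themes/yourtheme/*' '!/content/themes/yourtheme/' \
  '!/content/themes/yourtheme/*' > .gitignore

touch core/server/index.js core/server/other.js content/themes/yourtheme/index.hbs

# check-ignore exits 0 when a path is ignored, 1 when it would be tracked
git check-ignore -q core/server/other.js               && echo "other.js ignored"
git check-ignore -q core/server/index.js               || echo "index.js kept"
git check-ignore -q content/themes/yourtheme/index.hbs || echo "theme file kept"
```

<p>This is just a scratch check - your real repo keeps its own <em>.gitignore</em>, and <code>git ls-tree</code> after a commit remains the definitive test.</p>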

<hr />

<h3 id="createandconfigurevmserver">Create and Configure VM Server</h3>

<p>I use <a href='https://www.digitalocean.com/?refcode=fb638d4235b4' >DigitalOcean</a> and create a Linux VM using the latest version of Ubuntu.</p>

<p>ssh into your VM, then update Ubuntu:</p>

<pre><code>apt-get update
apt-get upgrade
apt-get dist-upgrade
</code></pre>

<h4 id="installgit">Install Git:</h4>

<pre><code>apt-get install git
</code></pre>

<p>Create your git user who will deploy updates to production. I call the user <code>git</code>:</p>

<pre><code>adduser git
</code></pre>

<h4 id="installnodejs">Install Node.js:</h4>

<pre><code>apt-get install python-software-properties python g++ make
apt-get install software-properties-common
add-apt-repository ppa:chris-lea/node.js
apt-get update
apt-get install nodejs
</code></pre>

<h4 id="installghost">Install Ghost:</h4>

<p><strong>You want to log in or change your user to <code>git</code> for the following steps.</strong> This will help ensure that the permissions allow the git user to deploy files to production when we get to that step. Also create a directory to hold the production blog app.</p>

<blockquote>
  <p>Putting the git repository in the git user home directory (<code>/home/git</code>) makes for a nice easy path when we set up the remote git repo in development.</p>
</blockquote>

<p><strong>Note: Be sure to use the latest version of Ghost in place of <code>ghost-0.4.0.zip</code> in this example.</strong></p>

<pre><code>su git

cd /home/git
mkdir yourblog
mkdir tmp
cd tmp
wget https://ghost.org/zip/ghost-0.4.0.zip
unzip -uo ghost-0.4.0.zip -d /home/git/yourblog
cd /home/git/yourblog
npm install --production
mv config.example.js config.js
</code></pre>

<p>Note: If unzip is not installed yet, log in as root <code>su -</code> and install it: <code>apt-get install unzip</code></p>

<p>If you want to test your Ghost install at this point (before getting to Nginx), remember to edit the <em>config.js</em> file to add the correct host and port settings for your server. For a quick test, change the port to 80, run <code>npm start --production</code> from your <em>/home/git/yourblog</em> directory, then point a web browser to your server and verify that you see the default Ghost blog page.</p>

<p><strong>Troubleshooting</strong>: As in development, hopefully the interactive logging will show you any issues.</p>

<h4 id="installnginx">Install Nginx</h4>

<p>This will allow you to run multiple Ghost blog apps on the server. We will use Nginx as a reverse proxy to route user requests to the appropriate Node.js app.</p>

<p>Log in as <em>root</em> for this step:</p>

<pre><code>su -

apt-get install nginx
service nginx start
</code></pre>

<p>Test Nginx is running now by visiting your server in a web browser. You should see the Nginx splash screen.</p>

<h4 id="configurenginx">Configure Nginx</h4>

<p>For each of the Ghost blog sites you want to host, create an Nginx <em>.conf</em> file. You can use any name for the .conf file, but matching your blog name will help avoid confusion:</p>

<pre><code>cd /etc/nginx/conf.d
cat &gt; yourblog.conf
</code></pre>

<p>Now paste in this file content, modified for your setup:</p>

<pre><code>server {
    listen 80;

    server_name yourblogdomain.com www.yourblogdomain.com;

    location / {
        proxy_pass http://localhost:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
</code></pre>

<p>The important settings to modify are the <code>server_name</code> and <code>proxy_pass</code> lines.</p>

<p>This file tells Nginx to listen for any requests on port 80 for <em>yourblogdomain.com</em> or <em>www.yourblogdomain.com</em> and route them to <code>http://localhost:8000</code> which is the host and port we set up in our Ghost <em>config.js</em> file. (You will assign different unique port numbers for each subsequent Ghost app you want to host.)</p>

<p>Note: You can also combine all your blogs in one .conf file - just put one <em>server {}</em> block after another. Also, you can use <em>*.yourdomain.com</em> to grab any host names and route them. I did have one issue where I needed to specifically add <code>mydomain.com *.mydomain.com</code> for it to route requests to mydomain.com with no host prefix.</p>
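<p>For example, a hypothetical second blog on the same server would get its own <em>server {}</em> block with its own domain and upstream port (both are placeholders here - the port must match that blog's production <em>config.js</em> setting):</p>

```nginx
# Second blog: same pattern, different domain and upstream port
server {
    listen 80;

    server_name secondblogdomain.com www.secondblogdomain.com;

    location / {
        proxy_pass http://localhost:8001;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```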

<p>To test the Nginx reverse proxy process:</p>

<ul>
<li>Restart Nginx to enable the configuration: <code>service nginx restart</code> (or use <code>nginx -s reload</code>)</li>
<li><code>cd ~/yourblog</code> and <code>npm start --production</code> to start the Ghost Node app</li>
<li>Point your web browser to yourblogdomain.com and verify you see your blog now</li>
</ul>

<p>If you map more than a few domain names, the proxy may fail and you will get something like the following error in the Nginx error log:</p>

<pre><code>could not build the server_names_hash, you should increase server_names_hash_bucket_size: 32
</code></pre>

<p>To increase the <code>server_names_hash_bucket_size</code>, edit the <em>nginx.conf</em> file using vi:</p>

<pre><code>vi /etc/nginx/nginx.conf
</code></pre>

<p>The following line may be commented out with a <code>#</code> at the front of the line. Delete the <code>#</code> to re-activate the line:</p>

<pre><code>server_names_hash_bucket_size 64;
</code></pre>

<p>Restart Nginx. If 64 does not fix the problem at first, increase it by powers of 2 until it works (64, 128, 256, etc.)</p>

<p><strong>Troubleshooting:</strong> To better see what is going on, you can monitor the Nginx error or access logs while you try to reach the server:</p>

<pre><code>tail /var/log/nginx/access.log -f

tail /var/log/nginx/error.log -f
</code></pre>

<p><code>tail</code> will show you the last few lines of the log, and the <code>-f</code> will let you monitor any new log entries as they come in. Use <code>ctrl-c</code> to exit.</p>

<blockquote>
  <p>Another key check: Your <code>proxy_pass</code> destination in the Nginx .conf file <strong>must</strong> match the production <code>server: {host: 'XXX'}</code> host setting in your Ghost <em>config.js</em> file. If you use <em>localhost</em>, it must be used in both files. Or if you use your server ip address, it must be used in both places as well!</p>
</blockquote>

<h3 id="setupgitdeployment">Setup Git Deployment</h3>

<p>This process will let you easily push out updates to your blog theme or any custom Ghost server edits you might implement. We'll use the <em>post-receive</em> Git hook to automatically deploy changes to the Ghost app and restart the Ghost service any time we push out an update.</p>

<p>Log in as the <code>git</code> user so the repository is created with permissions the git user will be able to use.</p>

<pre><code>su git
</code></pre>

<p>First, add a bare git repository on the server for your blog:</p>

<pre><code>cd /home/git
mkdir yourblog.git
cd yourblog.git
git --bare init
</code></pre>

<p>Next, create the <em>post-receive</em> git hook file that will deploy updates to the <em>/home/git/yourblog</em> directory:</p>

<pre><code>cd /home/git/yourblog.git/hooks
cat &gt; post-receive
</code></pre>

<p>Then paste in this file content modified for your blog path:</p>

<pre><code>#!/bin/sh
GIT_WORK_TREE=/home/git/yourblog git checkout -f
</code></pre>

<p>(<code>ctrl-d</code> to exit the cat process)</p>

<p>Change permissions on the file to allow it to be executed:</p>

<pre><code>chmod +x post-receive
</code></pre>

<p>Now go back to your local development Mac and add the remote git repository:</p>

<pre><code>cd ~/yourblog
git remote add yourblogrepo_label git@yourproductionserver.com:yourblog.git
</code></pre>

<p>You will be typing <code>yourblogrepo_label</code> a lot, so use something short ;)</p>

<p>Test the deployment process on your mac. (Add the files to git and make an initial commit if you did not do so when you first set up the local git repo for this project: <code>git add .</code> and <code>git commit -m "init commit"</code>).</p>

<pre><code>cd ~/yourblog
git push yourblogrepo_label master
</code></pre>

<p>The git push command will prompt you for the password for the git user you created on the production server (unless you're using an ssh key to connect.)</p>

<p>This should push your tracked and committed code changes, and the git hook on the server should automatically deploy the updated files to the ghost app. Log into the production server and verify that your code change has now indeed been deployed to <em>/home/git/yourblog</em>.</p>

<p><strong>Troubleshooting:</strong> Be sure you can log in to your server as git: <code>ssh git@yourproductionserver.com</code>. Once you log in as git user, you should start at <em>/home/git</em> and your <em>yourblog.git</em> repository should be right in that directory.</p>

<p>There is a limitation to our deployment process at this point. If we make any updates to the Ghost node server or its code dependencies, the node app needs to be restarted in order to use the updates. We will address this next when we set up our node app as an Upstart system service.</p>

<h3 id="runningghostasaservice">Running Ghost as a Service</h3>

<p>In order to keep your Ghost blog apps running at all times, you can create an Ubuntu <em>Upstart script</em> and set up each node Ghost app as a system service which will do the following:</p>

<ul>
<li>Start the Ghost Node.js app whenever the server restarts</li>
<li>Restart the app if it should fail for some reason (a crash from a bug, memory leak, etc.)</li>
<li>Allow us to conveniently start/stop the app via system commands</li>
</ul>

<p>To set up a service for the Ghost node app, you need to be root user:</p>

<pre><code>su -
cd /etc/init

cat &gt; yourblog.conf
</code></pre>

<p>Now copy in this file content modified for your blog specifics:</p>

<pre><code>description "yourblog Ghost Node Service"
author      "Your info ifya want"
start on started mountall
stop on shutdown

respawn
respawn limit 99 5

script
    cd /home/git/yourblog
    npm start --production &gt;&gt; /var/log/yourblog.log  2&gt;&amp;1
end script

post-start script

end script
</code></pre>

<p>(<code>ctrl-d</code> to finish cat process)</p>

<p>This service script tells the server to cd to your ghost app root, start it, and log the <em>stdout</em> and <em>stderr</em> from the app to the log file you specified.</p>
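<p>The <code>&gt;&gt; /var/log/yourblog.log 2&gt;&amp;1</code> part is plain shell redirection: append stdout to the log file, then send stderr to the same place. A quick self-contained sketch of the same idiom, using a temp file instead of the real log path:</p>

```shell
# Demonstrate the append-both-streams redirection used in the Upstart script
log=$(mktemp)

# Write one line to stdout and one to stderr; both land in the log
{ echo "to stdout"; echo "to stderr" >&2; } >> "$log" 2>&1

cat "$log"   # both lines are in the file
```

<p>The order matters: <code>&gt;&gt; "$log"</code> points stdout at the file first, then <code>2&gt;&amp;1</code> points stderr wherever stdout currently goes. Reversed, stderr would still go to the terminal.</p>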

<p>Test starting, stopping and restarting your service:</p>

<pre><code>start yourblog
stop yourblog
restart yourblog
</code></pre>

<p>Once started, the service will keep the app running. Note: if you have a major error in the app that causes a crash on startup, this script will try to restart it up to 99 times within a 5 second window before giving up.</p>

<h4 id="enableapprestartondeployment">Enable App Restart on Deployment</h4>

<p>We can leverage the service commands to enhance our git deployment process to also restart the node app.</p>

<p>First we need to change the <em>sudoers</em> file to allow the git user to sudo as root to start and stop your blog service. We also need to stop the sudo process from prompting for a password when using sudo for this task, otherwise the automated git hook script will fail due to the prompting.</p>

<p>Add one line to the end of the sudoers file. I like to use vi, but you can use the editor of your choice. To use vi: <code>export EDITOR="vi"</code></p>

<p>Now to edit the sudoers file:</p>

<pre><code>visudo
</code></pre>

<p>It is important to add this line to the <strong><em>end</em></strong> of the file:</p>

<pre><code>git ALL = (root) NOPASSWD: /sbin/stop yourblog, /sbin/start yourblog
</code></pre>

<p>This tells the system to allow the git user to sudo as root with no password prompt, but only when running the start and stop commands. Note: <code>yourblog</code> refers to the service name you created - the same name as your Upstart <em>.conf</em> file. Also note that you need to use the full path for the start/stop commands. To double check your path, use <code>which start</code> to see the full path.</p>

<p><strong>Note</strong>: You need to logout and log into the server again for the sudoers file change to take effect in your ssh session.</p>

<p>To test, ssh as <code>git@yourserver.com</code></p>

<p>You should be able to stop/start the service as git user as follows:</p>

<pre><code>sudo /sbin/stop yourblog
sudo /sbin/start yourblog
</code></pre>

<p>Now update the git <em>post-receive</em> hook to stop and start the service:</p>

<pre><code>cd /home/git/yourblog.git/hooks
cat &gt; post-receive
</code></pre>

<p>Then paste this file contents in modified for your blog:</p>

<pre><code>#!/bin/sh
sudo /sbin/stop yourblog
GIT_WORK_TREE=/home/git/yourblog git checkout -f
sudo /sbin/start yourblog
</code></pre>

<p>(<code>ctrl-d</code> to finish)</p>

<p>Now when you deploy updates via git, the ghost service will restart. Make a small change to a ghost site file on your mac, commit the change, and deploy:</p>

<pre><code>cd ~/yourblog
git add .
git commit -m "test deployment process"
git push yourblogrepo_label master
</code></pre>

<p>You should see the new process id # to confirm the restart of the service - something like this:</p>

<pre><code>git push yourblogrepo_label master

git@yourblogdomain.com's password: xxxxx

Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (4/4), done.
Writing objects: 100% (6/6), 499 bytes | 0 bytes/s, done.
Total 6 (delta 2), reused 0 (delta 0)
remote: yourblogrepo_label stop/waiting
remote: yourblogrepo_label start/running, process 3469
To git@yourblogdomain.com:yourblog.git
   60f6037..ece0ca8  master -&gt; master
</code></pre>

<p>Seeing the process # gives us nice comfort that the service has indeed been restarted.</p>

<h3 id="conclusion">Conclusion</h3>

<p>Now you can develop your Ghost theme files in <em>~/yourblog/content/themes/yourtheme</em>, stage and commit the changes via Git, and deploy the updates to production with one command :)</p>

<p>When you want to add an additional blog to your server:</p>

<ul>
<li>Create a new Ghost app in development 
<ul><li>Create a new project directory</li>
<li>Install Ghost in the directory</li>
<li>Configure a unique port number in the config.js file</li>
<li>Create the local git repo for the project: <code>git init</code></li></ul></li>
<li>Create a new Ghost app in production as above
<ul><li>The app directory goes under the <em>/home/git</em> directory</li>
<li>Create a bare git repo for production: <code>git --bare init</code></li></ul></li>
<li>Configure Nginx
<ul><li>Create a <em>.conf</em> file with the new port number for this blog</li>
<li>Restart Nginx: <code>service nginx restart</code></li></ul></li>
<li>Create Upstart service and enable deployment process
<ul><li>Create the Upstart <em>.conf</em> script</li>
<li>Add a new line to the sudoers file so git user can start/stop the new service</li>
<li>Create the <em>post-receive</em> file and grant execute permissions to it</li>
<li>Add a remote git repository reference to your development project</li></ul></li>
</ul>

<p>Now you can get back to development knowing you have a lightning fast and reliable way to deploy your changes to production.</p>

<p>Cheers,</p>

<p>-Sv</p>]]></description><link>http://seanvbaker.com/a-ghost-workflow/</link><guid isPermaLink="false">894ea37b-878f-4295-bd10-a5dcab57f995</guid><category><![CDATA[Ghost Blog]]></category><category><![CDATA[Node.js]]></category><category><![CDATA[Git Deployment]]></category><category><![CDATA[Nginx]]></category><category><![CDATA[ghost]]></category><dc:creator><![CDATA[Sean V Baker]]></dc:creator><pubDate>Fri, 15 Nov 2013 20:09:02 GMT</pubDate></item></channel></rss>