Building a Captcha using HTML5 Canvas

CAPTCHA (/ˈkæp.tʃə/, an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”) is a type of challenge–response test used in computing to determine whether or not the user is human.[1]
The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford.[2] The most common type of CAPTCHA was first invented in 1997 by two groups working in parallel: (1) Mark D. Lillibridge, Martin Abadi, Krishna Bharat, and Andrei Z. Broder; and (2) Eran Reshef, Gili Raanan and Eilon Solan.[3] This form of CAPTCHA requires that the user type the letters of a distorted image, sometimes with the addition of an obscured sequence of letters or digits that appears on the screen. Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, a CAPTCHA is sometimes described as a reverse Turing test.

This user identification procedure has received many criticisms, especially from disabled people, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read. It takes the average person approximately 10 seconds to solve a typical CAPTCHA.[4]

Demo

Along similar lines, we are going to build a simple Captcha, which is going to validate a form before submitting it. Here is how it is going to look:

Live demo of the same can be found here: arvindr21.github.io/captcha

So, let’s get started.

Design

The Captcha module that we are going to create in this post can be tightly coupled with an existing form or can be used in a generic way.

We are going to have a container that the app owner is going to provide. The captcha and the input field are going to be created inside this container.

To generate the passcode, we are going to use Math.random() in combination with randomly chosen uppercase and lowercase letters.

The input field will publish two events on keyup: captcha.success when the text box value matches the generated passcode, and captcha.failed when the values do not match. The “form owner” is going to listen to these events to know whether the captcha is valid or invalid.

In the sample form we are building, we are going to enable or disable the submit button based on the above events.

And we are going to style the text in canvas as well to add some randomness.

So, let’s get started.

Setup Simple Login Form

We are first going to set up a simple login form using Twitter Bootstrap 4. We are going to use the Signin Template for Bootstrap for the same.

Anywhere on your machine, create a folder named  login-form-with-captcha and inside that create a new file named  index.html.

Update  index.html  as shown below

I have borrowed the structure and styles from Signin Template for Bootstrap and built the above page.

As you can see, the Sign in button is disabled on load. We will enable the button once the Captcha is valid. We will do this in the later sections.

If you open the page in the browser, it should look something like


A simple form for a simple job!

Setup Captcha

Now, we are going to setup the container where the Captcha is going to appear. Just above the button tag,  we will add the Captcha container

Next, we are going to create a file named  captcha.js in the  login-form-with-captcha folder and then we will link the same to  index.html  just after the form tag as shown below

The basic  captcha.js would be structured this way

The above code is an IIFE with a Module pattern.
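As a rough illustration, such a skeleton might look like the following (the container id and log message are illustrative):

(function (window) {
  'use strict';

  var CAPTCHA = {
    init: function (containerId) {
      console.log('Captcha module initialised for #' + containerId);
      // the actual setup is fleshed out in the following sections
    }
  };

  // expose the module and kick it off against the captcha container
  window.CAPTCHA = CAPTCHA;
  CAPTCHA.init('captcha');
}(window));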

If we save the above file and refresh the browser, we should see the following in the console

Now, we will flesh out the remaining code.

Develop Captcha module

Now that we have a basic structure, we will continue building the plugin. First, we are going to set up init().

init()

In  init(), we are going to do the following

  1. Get the captcha container
  2. If the container does not exist, exit the initialisation
  3. Create a canvas element & an input text element
  4. Setup Canvas
  5. Setup Textbox
  6. Bind events on Textbox
  7. Generate Passcode
  8. Set Passcode
  9. Render component on page

The Updated  init() would look like
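Here is a sketch of such an init(); the helper names mirror the steps above, though the exact code will differ:

init: function (containerId) {
  // 1 & 2. get the captcha container; bail out if it is missing
  var container = document.getElementById(containerId);
  if (!container) { return; }

  // 3. create a canvas element & an input text element
  var canvas = document.createElement('canvas');
  var textbox = document.createElement('input');

  // 4 - 6. set up the canvas, the textbox and its events
  this.setupCanvas(canvas, container);
  this.setupTB(textbox);
  this.bindTBEvents(textbox);

  // 7 & 8. generate the passcode and draw it on the canvas
  this.passcode = this.generatePasscode();
  this.setPasscode(canvas, this.passcode);

  // 9. render the component on the page
  this.render(container, canvas, textbox);
}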

setupCanvas()

We are going to setup a canvas element as shown below

We are setting up a canvas with the width of the parent container and height of 75px.
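For instance, a sketch of this setup (assuming the canvas and its parent container are passed in):

setupCanvas: function (canvas, container) {
  // match the parent container's width and fix the height at 75px
  canvas.width = container.clientWidth;
  canvas.height = 75;
  canvas.className = 'captcha-canvas';
}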

setupTB()

The text box will be setup as shown below

We have updated the type of the input field and added a className and a placeholder. Next, we are going to bind events to the input field. The captcha.success and captcha.failed events are the ones that the parent application consuming this captcha will be listening to.
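Roughly (the class name and placeholder text are illustrative):

setupTB: function (textbox) {
  textbox.type = 'text';
  textbox.className = 'form-control captcha-text';
  textbox.placeholder = 'Enter the text shown above';
}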

bindTBEvents()

On keyup, we are going to validate the passcode against the input field

We have created 2 events

  1. When the validation is successful
  2. When the validation fails

Pretty straightforward.
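A sketch of this handler, publishing the two events via CustomEvent (the actual implementation may differ):

bindTBEvents: function (textbox) {
  var self = this;
  textbox.addEventListener('keyup', function () {
    // compare the typed value against the generated passcode
    var eventName = (textbox.value === self.passcode) ?
      'captcha.success' : 'captcha.failed';
    textbox.dispatchEvent(new CustomEvent(eventName, { bubbles: true }));
  });
}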

generatePasscode()

Below is the logic to generate a random passcode.
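One way to do it (a sketch; the length and character ranges are illustrative):

generatePasscode: function (length) {
  var passcode = '';
  length = length || 6;
  for (var i = 0; i < length; i++) {
    // pick a random letter, then randomly keep it upper case or lower it
    var char = String.fromCharCode(65 + Math.floor(Math.random() * 26));
    passcode += (Math.random() > 0.5) ? char : char.toLowerCase();
  }
  return passcode;
}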

Scripting to the rescue!

setPasscode()

For setPasscode(), we are going to add two more helper functions that pick a random font color for each character.

And then finally, setPasscode() draws the generated passcode onto the canvas using these helpers.
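As a sketch (the random-color helper and the drawing offsets are illustrative):

getRandomColor: function () {
  // build a random rgb() color for each character
  var r = Math.floor(Math.random() * 200);
  var g = Math.floor(Math.random() * 200);
  var b = Math.floor(Math.random() * 200);
  return 'rgb(' + r + ',' + g + ',' + b + ')';
},

setPasscode: function (canvas, passcode) {
  var ctx = canvas.getContext('2d');
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  ctx.font = '30px Arial';
  // draw each character in a random colour to add some noise
  for (var i = 0; i < passcode.length; i++) {
    ctx.fillStyle = this.getRandomColor();
    ctx.fillText(passcode[i], 20 + i * 25, 45);
  }
}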

render()

Now that we have all the pieces ready, we will render the canvas and text box on the page
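In essence (a sketch):

render: function (container, canvas, textbox) {
  // place the canvas and the input field inside the provided container
  container.appendChild(canvas);
  container.appendChild(textbox);
}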

This concludes our Captcha module.

The final  captcha.js would be as follows

Setup Validation

Now, we are going to set up the validation on the form, after including captcha.js in index.html.
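For example, a small inline script can listen for the two captcha events and toggle the Sign in button; the selectors below are illustrative:

var signInBtn = document.querySelector('button[type="submit"]');
var captchaContainer = document.getElementById('captcha');

// the events dispatched on the textbox bubble up to the container
captchaContainer.addEventListener('captcha.success', function () {
  signInBtn.removeAttribute('disabled');
});

captchaContainer.addEventListener('captcha.failed', function () {
  signInBtn.setAttribute('disabled', 'disabled');
});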

The complete  index.html would be as follows

Save all the files and when we refresh the page in the browser, we should see

You can find the above code here: arvindr21/captcha.


Thanks for reading! Do comment.
@arvindr21

Build & Publish a Node Package Module (NPM)

In this post, we are going to build and publish a Node Package Module (NPM). The module we are going to build is a simple utility which takes a web-supported color name as an argument and spits out the HEX & RGB values for it. We are going to use a Test Driven Development (TDD) style of coding for building this module.

I have taken the color names from htmlcolorcodes.com – Thanks to them!

Before we get started, here is a quick preview of the CLI version in action

The story behind rebeccapurple. RIP Rebecca Meyer.

Getting Started

Below are the steps we are going to follow

  1. Setup Yeoman
  2. Install Generator
  3. TDD Colorcode
  4. Validate Code Coverage
  5. Push to Github
  6. Setup Travis CI
  7. Setup Coveralls
  8. Publish to NPM

Setup Yeoman

If you are new to Yeoman, here is a quick introduction.

What is Yeoman?

Ripping off from yeoman.io

Yeoman helps you to kickstart new projects, prescribing best practices and tools to help you stay productive.

To do so, we provide a generator ecosystem. A generator is basically a plugin that can be run with the yo command to scaffold complete projects or useful parts.

Through our official Generators, we promote the “Yeoman workflow”. This workflow is a robust and opinionated client-side stack, comprising tools and frameworks that can help developers quickly build beautiful web applications. We take care of providing everything needed to get started without any of the normal headaches associated with a manual setup.

With a modular architecture that can scale out of the box, we leverage the success and lessons learned from several open-source communities to ensure the stack developers use is as intelligent as possible.

As firm believers in good documentation and well thought out build processes, Yeoman includes support for linting, testing, minification and much more, so developers can focus on solutions rather than worrying about the little things.

So ya, Yeoman is like your File > New > NPM Project: fill in the project information and boom! An NPM project gets generated; but from the command line.

To install Yeoman globally, run
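Typically this is the standard global npm install of the yo CLI:

npm install -g yo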

And to validate, run

Install Node.js

Before we go further, make sure you have Node.js installed on your machine. You can follow this post Hello Node for the same. I am using the following versions

generator-np

In this post, we are going to use a Yeoman generator named generator-np to scaffold our base for building the NPM. To install the generator globally, run
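As with any Yeoman generator, this is a global npm install:

npm install -g generator-np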

Once the installation is done, we are good to scaffold our app

Scaffold color2code

Anywhere on your machine, create a folder named color2code  and open a new terminal/prompt there. Now, run

And you can answer the questions as shown below

This will take a moment to complete scaffolding.

Once the project is scaffolded, we should see something like

In this project, we are going to follow a Test Driven Development approach. We will first update the  test/index.js  file with all possible scenarios and then get started with the code.

Next, we are going to work in src  folder to update the cli.js  and the  lib/index.js  files.

Before we get started, let’s quickly look at the solution.

Color2Code Solution

The app we are going to build is a simple utility that takes popular web-supported color names and spits out the RGB and HEX values for them.

We are going to maintain a collection of all the web-supported colors and their HEX values. In our code, we are going to read the color name and fetch the HEX value. We then use the HEX value to generate the RGB version of the same.

Sounds simple right?

So, let’s see how to work with this solution & build a NPM.

Update project configuration

I have made a few changes to the project so that things are a bit easier to manage.

Open  .eslintrc  and make the changes to the properties shown below

Next, we are going to update  package.json  as highlighted below

Finally, update  .travis.yml as shown below

That is all!

Test Driven Development – TDD

To work with the code base we have just scaffolded, we are going to use TDD or Test Driven Development. We are going to write the test cases first and then develop our code.

TDD is not for the weak hearted. Seeing your application fail even before writing a single line of code takes courage.

You can read more about TDD here: Test Driven Development by Example

Write Test Cases

To get started, we are going to write the test cases first and see our app fail and then start making the required changes to our code to fix it.

Open  test/index.js and update it as shown below

The above file outlines 5 scenarios

  1. If there is no color name provided to our NPM
  2. If one color name is provided to our NPM
  3. If more than one color names are provided to our NPM
  4. If an invalid color name is provided to our NPM
  5. If an invalid color name is provided along with valid color names to our NPM

The code we are writing should satisfy the above 5 scenarios.
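The exact assertions depend on the test runner the generator scaffolds; purely as an outline, the five scenarios map to specs along these lines (mocha-style, illustrative only):

describe('color2code', function () {
  // the five scenarios above, as pending specs
  it('responds with usage info when no color name is provided');
  it('returns the HEX & RGB values for one valid color name');
  it('returns values when more than one color name is provided');
  it('reports an error for an invalid color name');
  it('handles a mix of valid and invalid color names');
});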

Now, let’s run the NPM and see it fail.

Run the NPM

We are going to use nodemon to watch our app for changes and rebuild our application.

To install nodemon, run
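The usual global install does the job:

npm install -g nodemon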

Once nodemon is installed, from inside color2code  folder, run

If the command ran successfully, we should see an error blob stating that tests have failed.

Yay! we did it.

Now, let’s get cracking on fixing these cases.

PS: Keep the nodemon running in the terminal/prompt. As we keep making changes and saving files, the tests will be executed again.

Develop the NPM

First, we are going to get a dump of Web colors. Thanks to htmlcolorcodes.com, we have a list of 148 color names.

Create a file named  colors.json inside  src/lib folder and update it as shown below

Save the file and you should see that nodemon  has detected changes to the src  folder and is running the prepare  task again. The tests are still failing.

Next, inside  src/lib folder, create a file named  utils.js and update it as shown below

On Line 1, we have required the colors hash.  hexToRGB() takes a HEX code and returns the RGB hash for the same.  processColor() takes in the color name and returns the final result.
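For reference, a sketch of what these two helpers can look like (the colors.json shape is assumed to map color names to HEX strings):

// src/lib/utils.js (sketch)
const colors = require('./colors.json'); // e.g. { "Red": "#FF0000", ... }

// convert a "#RRGGBB" HEX string into an { r, g, b } hash
function hexToRGB(hex) {
  const value = hex.replace('#', '');
  return {
    r: parseInt(value.substring(0, 2), 16),
    g: parseInt(value.substring(2, 4), 16),
    b: parseInt(value.substring(4, 6), 16)
  };
}

// look up a color name and return its HEX and RGB values
function processColor(name) {
  const key = Object.keys(colors)
    .find(c => c.toLowerCase() === String(name).toLowerCase());
  if (!key) {
    return { name: name, error: 'Color not found' };
  }
  const hex = colors[key];
  return { name: key, hex: hex, rgb: hexToRGB(hex) };
}

module.exports = { hexToRGB, processColor };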

Save the file and still the tests do not pass.

Awesome! let’s continue

Update  src/lib/index.js as shown below

Here we are processing the options and responding accordingly.
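As an illustration, the processing can be as simple as mapping each incoming name through processColor() (the export shape here is an assumption):

// src/lib/index.js (sketch)
const { processColor } = require('./utils');

module.exports = (colorNames) => {
  if (!colorNames || !colorNames.length) {
    return { error: 'Please pass at least one color name' };
  }
  // run every color name that was passed in through processColor()
  return colorNames.map(processColor);
};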

Save the file and Boom!

All the tests pass! And our NPM works!

Now, we will update  src/cli.js as follows

This concludes our development.

Simulate NPM install & Test

If you would like to simulate an NPM global install and test your app before publishing, you can do so by running the below command from inside the  color2code folder
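One way to do this is npm link, which symlinks the package as if it were installed globally:

npm link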

And we should see something like

And from anywhere on your machine you can run

Awesome right!

Code Coverage

We have written code and we have written test cases. Now, let’s take a look at the code coverage.

From inside  color2code folder, run

And we should see something like

You can also open  coverage/index.html to view the report in the browser. It should look something like this

So we are good to move our code to Github.

Push color2code to Github

You can push your fresh NPM to git to maintain it and let other people look-see/contribute to your code. For that we will follow the below steps

  1. Create a new Github repo: https://github.com/new
  2. Run the following commands
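These are the usual git steps (the remote URL below is a placeholder):

git init
git add .
git commit -m "Initial commit"
git remote add origin https://github.com/<your-username>/color2code.git
git push -u origin master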

PS: Update your repo URL as applicable

Voila! The code is deployed to Github. You can checkout my code here: arvindr21/color2code

Now that this is done, let’s setup Travis CI and Coveralls.

Setup Travis CI

Now that the repo is set up in Github, we will have a CI system in place. A Continuous Integration system keeps a tab on your repo, runs the test cases on new changes, and lets you know if anything is broken.

Travis CI is a popular open source solution. You can read more about Travis CI here. We are going to use the same for managing our NPM.

If you open your Github repo, at the top of your readme, you should find a section as below

Click on “build | unknown” and you should be taken to Travis CI. In Travis CI, you should see a page like below

Activate repository.

From now on, whenever there is a push to the repo, the CI will kick in and check if the build passes.

Before we trigger a build, we will setup Coveralls.

Setup Coveralls

Quoting from coveralls.io/continuous-integration

Coveralls takes the build data from whichever CI service your project uses, parses it, and provides constant updates and statistics on your projects’ code coverage to show you how coverage has changed with the new build, and what isn’t covered by tests. Coveralls even breaks down the test coverage on a file by file basis. You can see the relevant coverage, covered and missed lines, and the hits per line for each file, as well as quickly browse through individuals files that have changed in a new commit, and see exactly what changed in the build coverage.

Back to our repo: at the top of the readme, click on “coverage | unknown” and you should be redirected to coveralls.io. Sign in, click on the “+” sign in the menu on the left-hand side, and search for your repo. Once you find it, enable Coveralls for this repo.

And that is all, we are done.

Trigger Build

Now, let’s go back to Travis and trigger a build from the “More options” menu on the right-hand side. You should be shown a popup; fill it in as applicable to trigger a custom build.

And once the build is completed, you should see something like this

Sweet right! Our NPM works on Node versions 4 to 10. Yay!!

This trigger will happen automagically whenever there is a new commit/pull request.

The final step, if the build passes, is to update Coveralls. If we navigate to Coveralls for our repo, we should see something like

Travis arvindr21/color2code

We have a code coverage of 90%, which is good, considering we have only written a few lines of code.

We can also find out which files have what amount of coverage, as shown below

You can drill down further to analyse the issues.

With this we are done with our CI and Code coverage setup. If we go back to our Repo’s readme file, we should now see

Now, we are going to publish the color2code to npmjs.com.

Publish color2code to NPM

To push color2code to NPM, we need to have an account with npmjs.com. Once you have signed up and activated your account, you can log in to the same from your command prompt/terminal.

Run
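The standard command for this is:

npm login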

Once logged in, make sure you are at the root of  color2code folder and run
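The publish itself is a single command:

npm publish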

And boom your color2code  NPM is published.

If you get an error, it will most likely be because you have used the same name as mine (color2code). You need to use another name for your module in the package.json file and then try publishing again.

Your NPM is published. Navigate to npmjs.com/package/color2code to see the module we have just built.

With this our repo badges would look like

Hope you have learned how to build, publish and manage your own NPM!


Thanks for reading! Do comment.
@arvindr21

Electron, WordPress & Angular Material – An Offline Viewer

Imagine that you are at the airport, waiting for your flight, and you hear a couple of guys sitting behind you talking about Nodejs, Angularjs and MongoDB, and you, being a full stack developer and blogger, start eavesdropping. And all of a sudden one guy mentions.. “Dude, you should totally check out this awesome blog.. It is titled ‘The Jackal of Javascript’ & that guy writes sweet stuff about node webkit, ionic framework and full stack application development with Javascript”. And the other guy says “cool..” and with his airport wifi connection starts browsing the blog on his laptop.. Within a few moments he loses interest because the pages aren’t loading.

What if the owner of the wordpress blog had an offline viewer, and the first guy could show all the awesome stuff in the blog to the other guy on his laptop without any internet connection? Wouldn’t that be cool?

So that is what we are going to do in this post. Build an offline viewer for a wordpress blog.

Below is a quick video introduction to the application we are going to build. 


Sweet right? 

So, let us stop imagining and let us build the offline viewer.

You can find the completed code here.

Architecture

As mentioned in the video, this is a POC for creating an offline viewer for a wordpress blog. We are using Electron (a.k.a Atom shell) to build the offline viewer. We will be using Angularjs in the form of Angular Material project to build the user interface for the application.

(Architecture diagram of the offline viewer)

As shown above, we will be using the request node module to make HTTP requests to the WordPress API to download the blog posts in JSON format and persist them locally using DiskDB.

Once all the posts are downloaded, we will sandbox the content. The word sandbox here refers to containment of user interaction & experience inside the viewer, that is, how we capture the content and replay it back to the user. In our case, we are sandboxing the images and links to control the user experience when the user is online or offline. This is better illustrated in the video above.

The JSON response from the WordPress API would look like : https://public-api.wordpress.com/rest/v1.1/sites/thejackalofjavascript.com/posts?page=1.

If you replace my blog's domain name in the above URL with your WordPress blog's, you should see the posts from your blog. We will be working with the content property of the post object.

Note : We will be using recursion in a lot of places to process collections asynchronously. I am still not sure if this is an ideal design pattern for an application like this.

Prerequisites

The offline viewer is a bit complicated in terms of the number of components used. Before you proceed, I recommend taking a look at the following posts

Getting started

We will be using a yeoman generator named generator-electron to scaffold our base project, and then we will add the remaining components as we go along.

Create a new folder named offline_viewer and open terminal/prompt there.

To setup the generator, run the following

npm install yo grunt-cli bower generator-electron

This may take a few minutes to download all the modules. Once this is done, we will scaffold a new app inside the offline_viewer folder. Run

yo electron

You can fill in the questions after executing the above command as applicable. Once the project is scaffolded and dependencies are installed, we will add a few more modules applicable to the offline viewer. Run,

npm install cheerio diskdb request socket.io --save

  • cheerio is to manage DOM manipulation in Nodejs. This module will help us sandbox the content.
  • diskdb for data persistence.
  • request module for contacting the wordpress blog and getting the posts, as well as for sandboxing images.
  • socket.io for realtime communication between the locally persisted data and the UI.

For generating an installer for Mac & some folder maintenance, we will add the below two modules.

npm install trash appdmg --save-dev

The final package.json should look like

Do notice that I have added a bunch of scripts and updated the meta information of the project. We will be using NPM itself as a task runner to run, build and release the app.

To make sure everything is working fine, run

npm run start

And you should see


Build the Socket Server

Now, we will create the server interface for our offline viewer, that talks to the WordPress JSON API.

Create a new folder named app at the root of the project. Inside the app folder, create two folders named server and client. These folders visually demarcate the server code vs. the client code, the server being Nodejs & the client being the Angular application. Inside the Electron shell, any code can be accessed anywhere, but to keep the code base clean and manageable, we will be maintaining the above folder structure.

Inside the app/server create a file named server.js. This file is responsible for setting up the socket server. Open server.js and update it as below

Things to notice

Line 1: We require the getport module. This module takes care of looking for an available port on the user’s machine that we can use to start the socket server. We cannot pick 8080 or 3000 or any other static port and assume that the selected port would be available on the client. We will work on this file in a moment.

Line 2 : We require the fetcher module. This module is responsible for fetching the content from WordPress REST API, save the data to DiskDB and send the response back.

Line 3 : We require the searcher module. This module is responsible for searching the locally persisted JSON data, as part of the search feature.

Line 7 : As soon as we get a valid port from getport module, we will start a new socket server.

Line 11 : Once a client connects to the server, we will setup listeners for load event and search event.

Line 13 : This event will be fired when the client wants to get the posts from the server. Once the data arrives, we emit a loaded event with the posts.

Line 19 : This event will be fired when the client sends a query to be searched. Once the results arrive, we emit the results event with the found posts.

Line 27 : Once the socket server is setup, we will execute the callback and send the used port number back.
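Pieced together from these notes, a sketch of server.js could look like the following (module and event names follow the ones described above; the original file will differ in its details):

// app/server/server.js (sketch)
var getPort = require('./getport');
var fetcher = require('./fetcher');
var searcher = require('./searcher');

module.exports = function (callback) {
  // find a free port and start the socket server on it
  getPort(function (port) {
    var io = require('socket.io')(port);

    io.on('connection', function (socket) {
      // fired when the client wants the posts
      socket.on('load', function (isOnline) {
        fetcher(isOnline, function (posts) {
          socket.emit('loaded', posts);
        });
      });

      // fired when the client sends a query to be searched
      socket.on('search', function (query) {
        searcher(query, function (results) {
          socket.emit('results', results);
        });
      });
    });

    // hand the used port back to the caller
    callback(port);
  });
};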

Next, create a new file named getport.js inside the app/server folder. This file will have the code to return an unused port. Update getport.js as below

Things to notice

Line 2 : Require the net module, to start a new server

Line 3 : Starting value of the port, from which we need to start checking for an available port

Line 5 : A recursive function that keeps running till it finds a free port.

Line 10 : We attempt to start a server on the port specified. If we are successful, we call the callback function with the port that worked; else, if the server errors out, we call getPort() again, this time incrementing the port by one, and try starting the server again. This goes on till the server creation succeeds.
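A compact sketch of this recursive port check (the starting port value is illustrative, and line numbers will differ from the notes above):

// app/server/getport.js (sketch)
var net = require('net');
var basePort = 8000; // starting value; incremented till a free port is found

function getPort(callback) {
  var server = net.createServer();

  server.once('error', function () {
    // port is taken; try the next one
    basePort += 1;
    getPort(callback);
  });

  server.once('listening', function () {
    // the port works; free it up and hand it back
    server.close(function () {
      callback(basePort);
    });
  });

  server.listen(basePort);
}

module.exports = getPort;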

Next, create a file named fetcher.js inside app/server folder. This file will consist of all the business logic for our application. Update app/server/fetcher.js as below

Things to notice

* Most of the iterative logic implemented here is recursive.

Line 1 : We include the request module

Line 2 : We include the sandboxer module. This module will be used to sandbox the responses that we get from the WordPress API

Line 3 : We include the imageSandBoxer module. This module consists of logic to sandbox images. As in convert the http URL to a base64 one.

Line 4: We include the connection module. This module consists of the code to initialize the DiskDB and export the db object.

Line 6 : We set a few values to be used while processing.

Line 12 : The fetcher is invoked from the server.js passing in the online/offline status of the viewer and a callback. Since we are using recursions, we pass in a third argument to  fetcher()  named skip, which will decide if we need to skip/stop making REST calls to the WordPress REST API.

Line 13, 14 : We query the DiskDB for all the posts and meta data.

Line 18 : If meta data is not present, we create a new entry with the total value set to -1. Total stores the total number of posts that the blog has.

Line 28 : We check if at least one batch of posts are downloaded and we can skip fetching from the REST API. If it is true, we check if all the posts are downloaded. If yes, we call  sendByParts() and send 20 posts at once.

Line 104 :  sendByParts() is a recursive function that sends back 20 posts per second

Line 40 : If not all posts are downloaded, we send what we have downloaded so far and then call  fetcher() passing in the online/offline status, the callback and set skip to true. Now, when  fetcher() is invoked from here, the if condition on line 28 will be false. So, it will move on and download the remaining posts. Before we call  fetcher(), we will set the page value.

Line 52 : If we are loading the posts from page 1 again, we are removing all the posts and downloading again. This is as part of the POC. Ideally, we need to implement a replace logic while saving existing posts instead of removing the file and adding it again.

Line 58 : We check if the user is online and then only start making calls to the WordPress API.

Line 59 : We create a request to fetch the first page posts.

Line 67 : If it is a success, and we have more than one post in the response, we update the total post count that is sent by the API in our meta collection.

Line 76 : Once the meta collection is updated, we invoke the sandboxer(). The sandboxer() takes in a set of posts and sandboxes the URLs and content. And the sandboxed content is sent back.

Line 80 : We save the sandboxed posts to DiskDB and return the same to the UI. After that we increment the page number and call  fetcher(). As mentioned, most of the logic in the application runs recursively.

Line 85 : If we are done downloading all the posts, we reset the page number and invoke the  imageSandBoxer(), which reads all the saved posts from DiskDB and converts the images with http urls to base 64.

Next, we will implement the sandboxer. Create a file named sandboxer.js inside app/server folder. And update it as below

Things to notice

* Most of the iterative logic implemented here is recursive.

Line 1 : We include cheerio

Line 2 : We create a few global scoped variables for recursion.

Line 9 : When the sandboxer is called, we reset the global variables and then call the  process() recursively till all the posts in the current batch are done.

Line 15 : Inside the process(), we check if a post exists. If it does, we start the sandboxing process, if not we execute the callback on line 78

Line 16 : We will access the content property on post and run the content through  cheerio.load(). This provides us with a jQuery like wrapper to work with DOM inside Nodejs. This is quite essential for us to sandbox the content

Line 19 : We sandbox the links. We iterate through each of the links and add an ng-click attribute to it with a custom function. Since I know I am going to use Angularjs, I have attached an ng-click attribute. Apart from that, I remove unwanted attributes and reset the href so that links do not fire by default.

Line 37 : If the anchor tag has children, I need to process them. In my blog, all the images are wrapped in an anchor tag because of a plugin I use. I do not want that kind of markup here, where clicking on the image takes the user to the original image. So we clean that up and add custom classes to manage the cursor. All this is sandboxing links.

Line 52 :  For syntax highlighting, I am using an Angular directive named hljs. So, I iterate over all the pre tags in my content and set an attribute on them that will help me work with the hljs directive. This is a typical example of integrating a third party angular directive with the sandboxer.

Line 58 : We sandbox all the iframe urls which have youtube as their src. By default, all the youtube embed URLs are protocol relative. They would look like //youtube.com?watch=1234. This will not work properly in the viewer, hence we will convert them to an http URL. Also, I am adding an ng-class attribute on any iFrame whose src has youtube in it. This is to show or hide the iFrame depending on the network status, as demoed in the video.

Line 69 : We sandbox the image tags, replacing any additional parameters in the URL. This is specific to my blog; I have a plugin which adds these.

Line 73 : Once the sandboxing is done, we need to update the original HTML with the sandboxed version.

Line 75 : Call  process() on the next post.
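To give a feel for the approach, here is a heavily trimmed sketch of the recursive process() and the link sandboxing; the real file also handles images, pre tags and iframes as described above:

// app/server/sandboxer.js (trimmed-down sketch)
var cheerio = require('cheerio');
var posts, index, done; // module-level state used by the recursion

module.exports = function sandboxer(batch, callback) {
  posts = batch; index = 0; done = callback;
  process(posts[index]);
};

function process(post) {
  if (!post) { return done(posts); } // all posts in this batch are done

  var $ = cheerio.load(post.content);

  // sandbox the links: route clicks through the Angular app instead of
  // firing the original href
  $('a').each(function () {
    var $a = $(this);
    $a.attr('ng-click', 'openLink("' + $a.attr('href') + '")');
    $a.attr('href', 'javascript:void(0)');
  });

  // write the sandboxed markup back onto the post and move on
  post.content = $.html();
  process(posts[++index]);
}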

To complete the sandboxing, we will be adding a new file named imageSandBoxer.js inside app/server folder. Update imageSandBoxer.js as below

Things to notice

* Most of the iterative logic implemented here is recursive.

imageSandBoxer() will be called only after all the posts are downloaded.

Line 1 : We require the connection module. This module consists of the code to initialize the DiskDB and export the db object.

Line 2 : We require cheerio

Line 5 : We require the request module. Do notice that we are setting encoding to null. This is to make sure we download image response as binary.

Line 12 : If there is no network connection, do not make calls

Line 15 : We call processPost(), passing in the post and a callback. This recursively runs till all the posts are done processing.

Line 21 : We get all the images from the post and call the  sandBoxImage() by passing in one image at a time. This recursively runs till all the images are done processing.

Line 29 : If there is a valid image, get the image data. Convert the response to a base 64 format on line 32 and update the src attribute.

Line 39 : Once all images in a given post are done processing, we update the data in DiskDB and go to the next post.
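The heart of the image conversion is roughly this (using request with encoding set to null so the body arrives as a binary Buffer; the function shape is illustrative):

// core of sandBoxImage() (sketch)
var request = require('request').defaults({ encoding: null });

function sandBoxImage($img, callback) {
  var src = $img.attr('src');
  if (!src) { return callback(); }

  request.get(src, function (err, res, body) {
    if (!err && res.statusCode === 200) {
      // embed the image as a base64 data URI so it works offline
      var dataUri = 'data:' + res.headers['content-type'] +
                    ';base64,' + body.toString('base64');
      $img.attr('src', dataUri);
    }
    callback();
  });
}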

Now, we will create a file named connection.js inside app/server folder. Update it as below
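connection.js is only a few lines; a sketch, assuming the posts and meta collections used above:

// app/server/connection.js (sketch)
var db = require('diskdb');

// load (or create) the posts and meta collections inside app/server/db
db = db.connect(__dirname + '/db', ['posts', 'meta']);

module.exports = db;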

For DiskDB to work, create an empty folder named db inside the app/server folder.

To complete the so called server, we will create a file named searcher.js inside app/server folder. Update searcher.js  as below

Things to notice

Line 1 : We require the connection to DB

Line 6 : We fetch all the posts

Line 9 :  We run through each post’s title and content and check if the keyword we are searching for exists. If yes, we push it to the results.

Line 19 : Finally we send back the results
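A sketch of searcher.js along the lines described above:

// app/server/searcher.js (sketch)
var db = require('./connection');

module.exports = function (query, callback) {
  var posts = db.posts.find();          // fetch all the locally saved posts
  var keyword = String(query).toLowerCase();
  var results = [];

  posts.forEach(function (post) {
    var title = (post.title || '').toLowerCase();
    var content = (post.content || '').toLowerCase();
    if (title.indexOf(keyword) > -1 || content.indexOf(keyword) > -1) {
      results.push(post);
    }
  });

  callback(results);                    // send back the matching posts
};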

This completes our server.

Build the Angularjs Client

To work with client side dependencies, we will use bower package manager. From the root of offline_viewer folder run

bower init

And fill the fields as applicable. This will create a bower.json file at the root of the project. Next, create a file named .bowerrc at the same level as bower.json. Update it as below

This will take care of downloading all the dependencies inside the lib folder.

Next, run

bower install angular-material angular-route roboto-fontface open-sans-fontface angular-highlightjs ng-mfb ionicons jquery --save

If you see a message like Unable to find a suitable version for angular, do as shown below


Quick explanation of the libraries

  • jquery : DOM manipulation.
  • angular-material : Material Design in Angular.js
  • angular-route : The ngRoute module provides routing and deeplinking services and directives for angular apps.
  • angular-highlightjs : AngularJS directive for syntax highlighting with highlight.js
  • ng-mfb : Material design floating menu with action buttons implemented as an Angularjs directive.
  • roboto-fontface : Bower package for the Roboto font-face
  • open-sans-fontface : Bower package for the open-sans font-face
  • ionicons : The premium icon font for Ionic Framework.

The final bower.json would look like

Next, we will update index.html file present at the root of  offline_viewer folder.

Things to notice

Lines 7 to 13 : We require all the CSS files.

Line 16 : We require the server.js file.

Line 17 : We invoke app(), which starts the socket server on an available port. We get the used port as the first argument in the callback. We set that value on the window object.

Line 27 : We require jQuery using the module format

Lines 30 – 42 : We require all js files needed. We will create the missing files as we go along.

Line 43 : We set up the fab button. This is the search button on the bottom right hand corner of the viewer.

Note : We have not used ng-app on any DOM element. We are going to bootstrap this Angular app manually.

Next, open index.js and update as below

Do notice line nos. 15, 32 and 36.

If you want you can delete index.css present at the root. We will be creating one more inside the client folder later.

Next, we will add the missing scripts. Create a folder named js inside app/client. Inside the js folder, create a file named app.js. Update app.js as below.

Things to notice

Lines 2 – 5 : We need to manually download the socket script file as we are not aware of the port that the socket server would start on.

Line 11 : Once the script is loaded, we will manually bootstrap the angular application.

Line 15 : Initialize a new Angular module named OfflineViewer and add all the dependencies.

Line 17 :  We will be configuring the default theme of Angular Material, the connection url for sockets and the routes.

Line 37 : We add listeners to the online & offline event. And when the state changes, we broadcast the appropriate event. So that the viewer can behave accordingly.
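A sketch of the bootstrap portion of app.js (the dependency names are approximate, and window.port is assumed to be the value set in index.html):

// app/client/js/app.js (sketch of the bootstrap logic)
(function () {
  // the application module and its dependencies (names are approximate)
  angular.module('OfflineViewer',
    ['ngMaterial', 'ngRoute', 'hljs', 'OfflineViewer.controllers']);

  // pull the socket.io client from our own server, since the port is only
  // known at runtime
  var script = document.createElement('script');
  script.src = 'http://localhost:' + window.port + '/socket.io/socket.io.js';

  script.onload = function () {
    // socket.io is available; bootstrap the Angular app manually
    angular.bootstrap(document, ['OfflineViewer']);
  };

  document.body.appendChild(script);
}());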

Next, create a new file named controller.js inside app/client folder. Update it as below

Things to notice

Line 1 : We create an OfflineViewer.controllers module

Line 4 :  HeaderCtrl is the controller for our application header. Here we define the status of the connectivity and are listening to  viewer-online and  viewer-offline. And we set the status variable depending on the event fired.

Line 22 :  AppCtrl is the main controller for our application; it notifies the socket server about the status of the viewer and makes requests to fetch the posts.

Line 31 : As soon as the  AppCtrl is initialized, it will check if the viewer has access to the internet. If not, it shows a dialog informing the user that they are offline and that some kind of internet access is needed to download the initial set of posts.

Line 42 : We emit the load event to our server, with the status. We will talk about  $socket a bit later.

Line 45 : Once the socket server receives the first set of posts, it will fire the loaded event, and the hook here will be called with the first set of posts. Here we concat the incoming posts with the existing posts and assign them to the scope variable.

Line 49 : When a user clicks on a post link, we update the main viewer with the content of the post.

Line 53 : This is the method we added while sandboxing the anchor tags. When the user clicks on a link, this method is fired and it shows a popup dialog asking the user if s/he wants to open the link. If yes, we execute line 62.

Line 68 : Whenever the user goes online, we emit the load event, indicating to our socket server to start fetching the posts if it has not already done so.

Line 74 : We reset the status to false if the user goes offline. This status is used to hide/show youtube videos in the content.

Line 86 : The  SearchCtrl manages the search feature. When the user clicks on the search icon in the bottom right corner, we show the search dialog.

Line 103 : When the user enters text and clicks on search, we emit the search event with the query text.

Line 115 : Once the results arrive, we update the scope variable with results, which displays the text.

Line 110 : When a user clicks on a post, we broadcast a showSearchPost event, which is listened to on line 80 and shows the post in the main content area.

Quite a lot of important functionality.

Next, we will create a custom directive that takes the content of a post and renders it. Before we render it, we need to compile it so that all the attributes we added while sandboxing come to life.

Create a new file named directives.js inside app/client/js folder and update it as below

All we do is parse and compile the content and render it inside a div.
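A sketch of such a directive (the directive and attribute names here are illustrative):

// app/client/js/directives.js (sketch)
angular.module('OfflineViewer.directives', [])
  .directive('postContent', function ($compile) {
    return {
      restrict: 'E',
      scope: { content: '=' },
      link: function (scope, element) {
        scope.$watch('content', function (html) {
          if (!html) { return; }
          // compile the sandboxed markup so the ng-click / ng-class
          // attributes added by the sandboxer come to life
          element.html(html);
          $compile(element.contents())(scope);
        });
      }
    };
  });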

Next, we will add a custom filter; all it does is take in content and run it through  $sce.trustAsHtml().

Create a new file named filters.js inside app/client/js folder and update it as below
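A sketch of such a filter (the filter name is illustrative):

// app/client/js/filters.js (sketch)
angular.module('OfflineViewer.filters', [])
  .filter('trustAsHtml', function ($sce) {
    // mark the post content as trusted HTML so Angular will render it
    return function (content) {
      return $sce.trustAsHtml(content);
    };
  });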