A CAPTCHA (/kæp.tʃə/, an acronym for “Completely Automated Public Turing test to tell Computers and Humans Apart”) is a type of challenge–response test used in computing to determine whether or not the user is human.
The term was coined in 2003 by Luis von Ahn, Manuel Blum, Nicholas J. Hopper, and John Langford. The most common type of CAPTCHA was first invented in 1997 by two groups working in parallel: (1) Mark D. Lillibridge, Martin Abadi, Krishna Bharat, and Andrei Z. Broder; and (2) Eran Reshef, Gili Raanan and Eilon Solan. This form of CAPTCHA requires that the user type the letters of a distorted image, sometimes with the addition of an obscured sequence of letters or digits that appears on the screen. Because the test is administered by a computer, in contrast to the standard Turing test that is administered by a human, a CAPTCHA is sometimes described as a reverse Turing test.
This user identification procedure has received many criticisms, especially from disabled people, but also from other people who feel that their everyday work is slowed down by distorted words that are difficult to read. It takes the average person approximately 10 seconds to solve a typical CAPTCHA.
Along similar lines, we are going to build a simple CAPTCHA that validates a form before it is submitted. Here is how it is going to look:
The Captcha module that we are going to create in this post can be tightly coupled with an existing form or can be used in a generic way.
We are going to have a container that the app owner is going to provide. The captcha and the input field are going to be created inside this container.
To generate the passcode, we are going to use Math.random() in combination with a randomly picked mix of uppercase and lowercase letters.
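As a rough sketch of that idea (the character pool and the passcode length of 6 are assumptions for illustration, not necessarily what the final captcha.js uses), the generator could look like this:

```javascript
// Hypothetical passcode generator: picks `length` random characters
// from a mixed-case alphanumeric pool using Math.random().
function generatePasscode(length) {
  var chars = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789';
  var code = '';
  for (var i = 0; i < length; i++) {
    code += chars.charAt(Math.floor(Math.random() * chars.length));
  }
  return code;
}

console.log(generatePasscode(6));
```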
The input field will publish two events on keyup:
captcha.success when the text box value matches the generated passcode &
captcha.failed when the values do not match. The “form owner” is going to listen to these events to know whether the captcha is valid or invalid.
In the sample form we are building, we are going to enable or disable the submit button based on the above events.
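To make that contract concrete, here is a dependency-free sketch of the flow. In the actual module the events would be DOM CustomEvents dispatched from the input; a tiny hypothetical pub/sub is used here only to keep the sketch self-contained:

```javascript
// Minimal pub/sub standing in for DOM events (an assumption for this sketch).
function createCaptchaBus() {
  var listeners = {};
  return {
    on: function (event, fn) {
      (listeners[event] = listeners[event] || []).push(fn);
    },
    emit: function (event) {
      (listeners[event] || []).forEach(function (fn) { fn(); });
    }
  };
}

// On keyup, the captcha compares the typed value against the passcode
// and publishes captcha.success or captcha.failed accordingly.
function validate(bus, passcode, typed) {
  bus.emit(typed === passcode ? 'captcha.success' : 'captcha.failed');
}

// The form owner wires the submit button state to the two events.
var bus = createCaptchaBus();
var submitDisabled = true; // the sample form starts with submit disabled
bus.on('captcha.success', function () { submitDisabled = false; });
bus.on('captcha.failed', function () { submitDisabled = true; });

validate(bus, 'aK9xQz', 'aK9xQz');
console.log(submitDisabled); // → false
```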
And we are going to style the text in canvas as well to add some randomness.
We are going to setup a canvas element as shown below
captcha.js - setupCanvas()
We are setting up a canvas with the width of the parent container and a height of 75px.
The text box will be setup as shown below
captcha.js - setupTB()
We have updated the type of the input field and added a className and a placeholder. Next, we are going to bind events to the input field. These are the events that the parent application consuming this captcha would be listening to.
On keyup, we are going to validate the passcode against the input field
In this post, we are going to build and publish a Node Package Module (NPM). The module we are going to build is a simple utility which takes a Web-supported color name as an argument and spits out the HEX & RGB values for it. We are going to use a Test Driven Development (TDD) style of coding for building this module.
Yeoman helps you to kickstart new projects, prescribing best practices and tools to help you stay productive.
To do so, we provide a generator ecosystem. A generator is basically a plugin that can be run with the yo command to scaffold complete projects or useful parts.
Through our official Generators, we promote the “Yeoman workflow”. This workflow is a robust and opinionated client-side stack, comprising tools and frameworks that can help developers quickly build beautiful web applications. We take care of providing everything needed to get started without any of the normal headaches associated with a manual setup.
With a modular architecture that can scale out of the box, we leverage the success and lessons learned from several open-source communities to ensure the stack developers use is as intelligent as possible.
As firm believers in good documentation and well thought out build processes, Yeoman includes support for linting, testing, minification and much more, so developers can focus on solutions rather than worrying about the little things.
So ya, Yeoman is like your File > New > NPM project | fill in the project information and boom! An NPM project gets generated; but from the command line.
To install Yeoman globally, run
➜ ~ npm install yo --global
And to validate, run
Yeoman installed version
Before we go further, make sure you have Node.js installed on your machine. You can follow the post Hello Node for the same. I am using the following versions
Node & NPM versions
In this post, we are going to use a Yeoman generator named generator-np to scaffold our base for building the NPM. To install the generator globally, run
➜ ~ npm install generator-np --global
Once the installation is done, we are good to scaffold our app
Anywhere on your machine, create a folder named
color2code and open a new terminal/prompt there. Now, run
Scaffold a new NPM
➜ color2code yo np
And you can answer the questions as shown below
This will take a moment to complete scaffolding.
Once the project is scaffolded, we should see something like
$ tree -L 4 -I
In this project, we are going to follow a Test Driven Development approach. We will first update the
test/index.js file with all possible scenarios and then get started with the code.
Next, we are going to work in
src folder to update the
cli.js and the
Before we get started, let’s quickly look at the solution.
The app we are going to build is a simple utility that takes a popular web-supported color name and spits out the RGB and HEX values of the same.
We are going to maintain a collection of all the web supported colors and their HEX values. And in our code, we are going to read color name and fetch the HEX value. We then use the HEX value to generate the RGB version of the same.
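The core of that idea can be sketched as follows. This is not the module's actual source; the tiny three-entry color map is an assumption standing in for the full web-color collection:

```javascript
// A small sample of the name -> HEX map; the real module ships the
// complete list of web-supported colors.
var colors = { red: '#FF0000', tomato: '#FF6347', teal: '#008080' };

// Look up the HEX value for a color name, then derive the RGB version
// by parsing each pair of hex digits.
function color2code(name) {
  var hex = colors[name.toLowerCase()];
  if (!hex) return null; // unknown color name
  var r = parseInt(hex.slice(1, 3), 16);
  var g = parseInt(hex.slice(3, 5), 16);
  var b = parseInt(hex.slice(5, 7), 16);
  return { hex: hex, rgb: 'rgb(' + r + ', ' + g + ', ' + b + ')' };
}

console.log(color2code('tomato'));
// → { hex: '#FF6347', rgb: 'rgb(255, 99, 71)' }
```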
Sounds simple right?
So, let’s see how to work with this solution & build an NPM.
Update project configuration
I have made a few changes to the project so that things are a bit easier to manage. Open
.eslintrc and make the changes to the properties shown below
// snipp snipp
Next, we are going to update
package.json as highlighted below
Now that this is done, let’s setup Travis CI and Coveralls.
Setup Travis CI
Now that the repo is set up on GitHub, we will put a CI system in place. A Continuous Integration system keeps tabs on your repo, runs test cases on new changes, and lets you know if anything is broken.
Travis CI is a popular open source solution. You can read more about Travis CI here. We are going to use the same for managing our NPM.
If you open your Github repo, at the top of your readme, you should find a section as below
Click on “build | unknown” and you should be taken to Travis CI. In Travis CI, you should see a page like below
From now on, whenever there is a push to the repo, the CI will kick in and check whether the build passes.
Before we trigger a build, we will setup Coveralls.
Coveralls takes the build data from whichever CI service your project uses, parses it, and provides constant updates and statistics on your projects’ code coverage to show you how coverage has changed with the new build, and what isn’t covered by tests. Coveralls even breaks down the test coverage on a file by file basis. You can see the relevant coverage, covered and missed lines, and the hits per line for each file, as well as quickly browse through individuals files that have changed in a new commit, and see exactly what changed in the build coverage.
Back in our repo, at the top of the readme, click on “coverage | unknown” and you should be redirected to coveralls.io. Sign in, click on the “+” sign in the menu on the left-hand side, and search for your repo. Once you find it, enable Coveralls for this repo.
And that is all, we are done.
Now, let’s go back to Travis and trigger a build from the “More options” menu on the right-hand side. You should be shown a popup; fill it in as applicable – trigger custom build
And once the build is completed, you should see something like this
Sweet, right? Our NPM works on Node versions 4 to 10. Yay!!
This trigger will happen automagically whenever there is a new commit/pull request.
The final step, if the build passes, is to update Coveralls. If we navigate to Coveralls for our repo, we should see something like
We have a code coverage of 90%, which is good, considering we have written only a few lines of code.
We can also find out which files have what amount of coverage, as shown below
You can drill down further to analyse the issues.
With this we are done with our CI and Code coverage setup. If we go back to our Repo’s readme file, we should now see
Now, we are going to publish the color2code to npmjs.com.
Publish color2code to NPM
To push color2code to NPM, we need to have an account with npmjs.com. Once you have signed up and activated your account, you can login to the same from your command prompt/terminal.
Login to NPM
Logged in as arvindr21 on https://registry.npmjs.org/.
Once logged in, make sure you are at the root of
color2code folder and run
And boom your
color2code NPM is published.
If you get an error, it is most likely because you have used the same name as mine – color2code. You need to use another name for your module in the
package.json file and then try publishing again.
What if the owner of the WordPress blog had an offline viewer, and the first guy showed all the awesome stuff on the blog to the other guy on his laptop without any internet connection? Wouldn’t that be cool?
So that is what we are going to do in this post: build an offline viewer for a WordPress blog.
Below is a quick video introduction to the application we are going to build.
So, let us stop imagining and let us build the offline viewer.
As mentioned in the video, this is a POC for creating an offline viewer for a WordPress blog. We are using Electron (a.k.a. Atom Shell) to build the offline viewer. We will be using AngularJS, in the form of the Angular Material project, to build the user interface for the application.
As shown above, we will be using the request node module to make HTTP requests to the WordPress API to download the blog posts in JSON format and persist them locally using DiskDB.
Once all the posts are downloaded, we will sandbox the content. The word sandbox here refers to containing the user interaction & experience inside the viewer – how we capture the content and replay it back to the user. In our case, we are sandboxing the images and links to control the user experience when the user is online or offline. This is better illustrated in the video above.
We will be using a yeoman generator named generator-electron to scaffold our base project, and then we will add the remaining components as we go along.
Create a new folder named offline_viewer and open terminal/prompt there.
To setup the generator, run the following
npm install yo grunt-cli bower generator-electron
This may take a few minutes to download all the modules. Once this is done, we will scaffold a new app inside the offline_viewer folder. Run
You can fill in the questions after executing the above command as applicable. Once the project is scaffolded and dependencies are installed, we will add a few more modules applicable to the offline viewer. Run,
"build":"npm run clean && npm run build-win && npm run build-linux && npm run build-mac",
Do notice that I have added a bunch of scripts and updated the meta information of the project. We will be using NPM itself as a task runner to run, build, and release the app.
To make sure everything is working fine, run
npm run start
And you should see
Build the Socket Server
Now, we will create the server interface for our offline viewer, that talks to the WordPress JSON API.
Create a new folder named app at the root of the project. Inside the app folder, create two folders named server & client. These folders visually demarcate the server code from the client code – the server being Node.js & the client being the Angular application. Inside the Electron shell, any code can be accessed anywhere, but to keep the code base clean and manageable, we will maintain the above folder structure.
Inside the app/server create a file named server.js. This file is responsible for setting up the socket server. Open server.js and update it as below
// get an unused port!
Things to notice
Line 1: We require the getport module. This module takes care of looking for an available port on the user’s machine that we can use to start the socket server. We cannot pick 8080 or 3000 or any other static port and assume that it would be available on the client. We will work on this file in a moment.
Line 2 : We require the fetcher module. This module is responsible for fetching the content from WordPress REST API, save the data to DiskDB and send the response back.
Line 3 : We require the searcher module. This module is responsible for searching the locally persisted JSON data, as part of the search feature.
Line 7 : As soon as we get a valid port from getport module, we will start a new socket server.
Line 11 : Once a client connects to the server, we will setup listeners for load event and search event.
Line 13 : This event will be fired when the client wants to get the posts from the server. Once the data arrives, we emit a loaded event with the posts.
Line 19 : This event will be fired when the client sends a query to be searched. Once the results arrive, we emit the results event with the found posts.
Line 27 : Once the socket server is setup, we will execute the callback and send the used port number back.
Next, create a new file named getport.js inside the app/server folder. This file will have the code to return an unused port. Update getport.js as below
Things to notice
Line 2 : Require the net module, to start a new server
Line 3 : Starting value of the port, from which we begin checking for an available port.
Line 5 : A recursive function that keeps running till it finds a free port.
Line 10 : We attempt to start a server on the specified port. If we are successful, we call the callback function with the port that worked; if the server errors out, we call
getPort() again, this time incrementing the port by one before trying to start the server. This goes on till the server creation succeeds.
Next, create a file named fetcher.js inside app/server folder. This file will consist of all the business logic for our application. Update app/server/fetcher.js as below
// if all posts are downloaded, we will send them back
// without making a call to the JSON server
// Feature : If you want, you can add a property to the
// meta collection, named `lastUpdate`
// > And if all the posts are downloaded, you can dispatch them
// as usual & then check the `lastUpdate` and if it is > 1 day or 1 week,
// download all the posts again to check for any updates
// clean up old posts
// make request only if we are online!
// update the found count
// sandbox the pages
// now that we have all the posts,
// we will start sandboxing the images
// this is the process of taking an image URL
// and converting it to base64
// making this app a true offline viewer!
// ALL Done!
//console.log('Sandboxing Images done!!')
// Videos are a bit complex, so left it out :D
// send only 20 posts per 1 second
Things to notice
* Most of the iterative logic implemented here is recursive.
Line 1 : We include the request module
Line 2 : We include the sandboxer module. This module will be used to sandbox the responses that we get from the WordPress API
Line 3 : We include the imageSandBoxer module. This module consists of logic to sandbox images. As in convert the http URL to a base64 one.
Line 4: We include the connection module. This module consists of the code to initialize the DiskDB and export the db object.
Line 6 : We set a few values to be used while processing.
Line 12 : The fetcher is invoked from server.js, passing in the online/offline status of the viewer and a callback. Since we are using recursion, we pass in a third argument to
fetcher() named skip, which decides whether we need to skip/stop making REST calls to the WordPress REST API.
Line 13, 14 : We query the DiskDB for all the posts and meta data.
Line 18 : If meta data is not present, we create a new entry with the total value set to -1. total stores the total number of posts the blog has.
Line 28 : We check if at least one batch of posts has been downloaded so we can skip fetching from the REST API. If that is true, we check if all the posts have been downloaded. If yes, we call
sendByParts() and send 20 posts at a time.
Line 104 :
sendByParts() is a recursive function that sends back 20 posts per second.
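Based on that description, sendByParts() might be sketched like this; the exact signature in the post's gist may differ, and the emit callback stands in for the socket emission:

```javascript
// Hypothetical sketch of sendByParts(): emit up to 20 posts, wait a
// second, then recurse on the remainder until everything is dispatched.
function sendByParts(posts, emit, start) {
  start = start || 0;
  if (start >= posts.length) return; // all batches dispatched
  emit(posts.slice(start, start + 20)); // one batch of (up to) 20 posts
  setTimeout(function () {
    sendByParts(posts, emit, start + 20);
  }, 1000);
}
```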
Line 40 : If not all posts are downloaded, we send what we have downloaded so far and then call
fetcher(), passing in the online/offline status and the callback, with skip set to true. Now, when
fetcher() is invoked from here, the if condition on line 28 will be false, so it will move on and download the remaining posts. Before we call
fetcher(), we set the page value.
Line 52 : If we are loading the posts from page 1 again, we remove all the posts and download them again. This is part of the POC; ideally, we should implement replace logic while saving existing posts instead of removing the file and adding it again.
Line 58 : We check if the user is online, and only then start making calls to the WordPress API.
Line 59 : We create a request to fetch the first page posts.
Line 67 : If it is a success, and we have more than one post in the response, we update the total post count that is sent by the API in our meta collection.
Line 76 : Once the meta collection is updated, we invoke the
sandboxer(), which takes in a set of posts and sandboxes the URLs and content. The sandboxed content is then sent back.
Line 80 : We save the sandboxed posts to DiskDB and return the same to the UI. After that, we increment the page number and call
fetcher(). As mentioned, most of the logic in the application runs recursively.
Line 85 : If we are done downloading all the posts, we reset the page number and invoke the
imageSandBoxer(), which reads all the saved posts from DiskDB and converts the images with http URLs to base64.
Next, we will implement the sandboxer. Create a file named sandboxer.js inside app/server folder. And update it as below
* Most of the iterative logic implemented here is recursive.
Line 1 : We include cheerio
Line 2 : We create a few global scoped variables for recursion.
Line 9 : When the sandboxer is called, we reset the global variables and then call the
process() recursively till all the posts in the current batch are done.
Line 15 : Inside
process(), we check if a post exists. If it does, we start the sandboxing process; if not, we execute the callback on line 78.
Line 16 : We access the content property on the post and run the content through
cheerio.load(). This provides us with a jQuery-like wrapper to work with the DOM inside Node.js, which is essential for sandboxing the content.
Line 19 : We sandbox the links. We iterate through each of the links and add an ng-click attribute with a custom function. Since I know I am going to use AngularJS, I have attached an ng-click attribute. Apart from that, I remove unwanted attributes and reset the href so the links do not fire by default.
Line 37 : If the anchor tag has children, I need to process them. In my blog, all the images are wrapped in an anchor tag because of a plugin I use. I do not want that kind of markup here, where clicking on the image takes the user to the original image. So we clean that up and add custom classes to manage the cursor. All this is part of sandboxing the links.
Line 52 : For syntax highlighting, I am using an Angular directive named hljs. So, I iterate over all the pre tags in my content and set an attribute on them that will help me work with the hljs directive. This is a typical example of integrating a third-party Angular directive with the sandboxer.
Line 58 : We sandbox all the iframe URLs which have youtube in their src. By default, all the YouTube embed URLs are protocol-relative; they look like
//youtube.com?watch=1234. This will not work properly in the viewer, hence we convert them to http URLs. Also, I am adding an ng-class attribute to any iframe whose src has youtube in it. This is to show or hide the iframe depending on the network status, as demoed in the video.
Line 69 : We sandbox the image tags and replace any additional parameters in the URL. This is specific to my blog; I have a plugin which adds them.
Line 73 : Once the sandboxing is done, we need to update the original HTML with the sandboxed version.
Line 75 : Call
process() on the next post.
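The post does this link step with cheerio; purely to keep this illustration dependency-free, the same idea can be sketched with a regex. The openLink handler name follows the post's description of the ng-click attribute; treat the rest as an assumption, not the module's actual code:

```javascript
// Hypothetical sketch of link sandboxing: hand the original href to an
// Angular ng-click handler and neutralize the default navigation.
function sandboxLinks(html) {
  return html.replace(/<a\s+href="([^"]*)"/g, function (match, href) {
    return '<a ng-click="openLink(\'' + href + '\')" href="javascript:void(0)"';
  });
}

console.log(sandboxLinks('<a href="https://example.com">a link</a>'));
// → <a ng-click="openLink('https://example.com')" href="javascript:void(0)">a link</a>
```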
To complete the sandboxing, we will be adding a new file named imageSandBoxer.js inside app/server folder. Update imageSandBoxer.js as below
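The full file is not reproduced here, but the core transformation it performs — turning downloaded image bytes into an inline data URI — boils down to something like the following. The helper name is hypothetical; in the post the bytes are first downloaded with the request module:

```javascript
// Hypothetical helper: given the raw bytes of a downloaded image and
// its content type, produce a base64 data URI that works offline.
function toDataURI(imageBytes, contentType) {
  return 'data:' + contentType + ';base64,' + imageBytes.toString('base64');
}

console.log(toDataURI(Buffer.from('hi'), 'image/png'));
// → data:image/png;base64,aGk=
```

Replacing each `<img src="http://…">` with such a data URI is what makes the viewer a true offline viewer.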
.content('I have noticed that you are offline, I need internet access for a while to download the posts. If you do not see any posts after sometime, launch the viewer after connecting to the internet. Prior saved posts will be accessible from the menu. ')
Line 1 : We create a
Line 4 :
HeaderCtrl is the controller for our application header. Here we define the status of the connectivity and are listening to
viewer-offline. And we set the status variable depending on the event fired.
Line 22 :
AppCtrl is the main controller for our application; it notifies the socket server about the status of the viewer and makes requests to fetch the posts.
Line 31 : As soon as the
AppCtrl is initialized, it checks if the viewer has access to the internet. If not, it shows a dialog informing the user that they are offline and that some kind of internet access is needed to download the initial set of posts.
Line 42 : We emit the load event to our server, with the status. We will talk about the
$socket a bit later.
Line 45 : Once the socket server receives the first set of posts, it fires the loaded event, and the hook here is called with the first set of posts. Here we concat the incoming posts with the existing posts and assign them to the scope variable.
Line 49 : When a user clicks on a post link, we update the main viewer with the content of the post.
Line 53 : This is the method we added while sandboxing the anchor tags. When the user clicks on a link, this method is fired and it shows a popup dialog asking the user if s/he wants to open the link. If yes, we execute line 62.
Line 68 : Whenever the user goes online, we emit the load event, indicating to our socket server that it should start fetching the posts if it has not already done so.
Line 74 : We reset the status to false if the user goes offline. This status is used to hide/show youtube videos in the content.
Line 86 : The
SearchCtrl manages the search feature. When the user clicks on the search icon in the bottom-right corner, we show the search dialog.
Line 103 : When the user enters text and clicks on search, we emit the search event with the query text.
Line 110 : When a user clicks on a post, we broadcast a showSearchPost event, which is listened to on line 80 and shows the post in the main content area.
Line 115 : Once the results arrive, we update the scope variable with the results, which displays them.
Quite a lot of important functionality.
Next, we will create a custom directive that takes the content of the posts and renders it. Before we render it, we need to compile it so that all the attributes we added while sandboxing come to life.
Create a new file named directives.js inside app/client/js folder and update it as below