How to automate and scrape a React web app

I’ve mentioned before that I’ve been automating and scraping websites for a long time. It’s not always pretty, but sometimes it’s absolutely necessary. In this post I want to share a few tips and tricks for automating React apps.

The Irony

There’s irony here. Often, we scrape websites because they are old and unmaintained and never had an API… we simply have no other choice. However, React is a more modern framework. Check out this timeline:

  • 2006 – Facebook releases its app platform and API
  • 2009 – Node.js is released, allowing API servers to be written in JavaScript
  • 2013 – Facebook releases React

Any website that uses React could also have been created with a convenient API for data access. In 2006, APIs were already popular enough that Facebook offered one, and in 2009 Node.js enabled developers to build API servers in JavaScript. Any team that adopted React in 2013 or later could have built an API server too.

To be fair, most React apps probably are making HTTP requests to get things done, but that doesn’t mean there’s an API there for you to consume. The backend of a site is often built to only handle the needs of the frontend, with things like authentication and session management tightly coupling the two together. If that’s the case, it can be better to scrape the React frontend than to try to interact directly with the server.

Typical Automation

Automating a website means getting a computer to use the website as if it were a person, clicking on things and entering text and copying data from the page.

There are dedicated tools for this, like Selenium and Puppeteer, each with its own API. We can also automate a website without any external tools: for example, we can run a script in the browser console, or create a bookmarklet that loads an external JavaScript file and runs it. In this case, we will use plain JavaScript to move through the website and find data.

We can find elements with CSS selectors or XPath expressions.

If we want to click on a link or a button, we can use the element's click() method.

If we want to enter text into a form, we can simply find the <input> or <textarea> element and set its value: element.value = "robot user input".

The above approach also works for selecting an option in a <select> dropdown.

However, there are unique challenges with automating React apps that make these usual approaches ineffective. React has a reactivity system (hence the name) that listens for user input events and then uses them to update its internal state data.

This reactivity system listens for specific events that aren’t fired when we change an element’s value directly or call an element’s click() method. The page may appear to respond as usual, but React’s internal state will not reflect the changes we made on the page.

Eventually, our automation will likely fail because React is not updating in response to our input.

Clicking in a React app

For many websites, calling element.click() is enough to click the element from JavaScript. In fact, we can still use that method on React components to follow links and submit some forms.

However, many React components require a click event to function properly. Maybe a switch component will only toggle when clicked. Maybe a dropdown or menu will only open when clicked. In these cases, we need to trigger React’s reactivity system.

To do so, we will dispatch a MouseEvent three times, once for each type of event that fires when a user clicks an element:

  • mousedown
  • mouseup
  • click

This ensures that React will recognize the click no matter what specific mouse events it was listening for. If React was expecting all three to happen in order, we satisfy that requirement:

/**
 * Simulates a mouse click event in the browser in a way that React will recognize.
 * @param {HTMLElement} element
 */
const simulateMouseClick = (element) => {
  const mouseClickEvents = ['mousedown', 'mouseup', 'click'];
  mouseClickEvents.forEach(mouseEventType =>
    element.dispatchEvent(
      new MouseEvent(mouseEventType, {
        view: window,
        bubbles: true,
        cancelable: true,
        buttons: 1,
      })
    )
  );
};
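As a quick sanity check, the dispatch-and-listen pattern itself can be exercised outside the browser using the standard EventTarget API. In this sketch a plain EventTarget stands in for a DOM element and plain Events stand in for MouseEvents, since MouseEvent only exists in the browser:

```javascript
// A plain EventTarget stands in for a DOM element. The dispatch pattern
// is the same one simulateMouseClick uses on a real element.
const target = new EventTarget();
const received = [];

for (const type of ['mousedown', 'mouseup', 'click']) {
  target.addEventListener(type, (event) => received.push(event.type));
}

for (const type of ['mousedown', 'mouseup', 'click']) {
  target.dispatchEvent(new Event(type, { bubbles: true, cancelable: true }));
}

console.log(received); // the listeners saw all three events, in order
```

In the browser, the same dispatch calls go to the element itself, e.g. simulateMouseClick(document.querySelector('button')).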

React app mouse hover

There are times when we need to hover our cursor over an element to proceed with automation. For example, we may need to move our mouse over a dropdown component in order for a menu to appear.

On some websites, such a menu will always exist but will be hidden by CSS classes. In that case we can still click links in the menu even if they are hidden. With React, however, those menu options won’t even be rendered unless the menu is activated by a mouse hover event.

In order to simulate a hover event, we will dispatch a mousemove event in the middle of our target element. React’s event listeners will pick up this event and recognize that we are hovering over the element:

/**
 * Simulates mouse movement across an element in a way that React will recognize.
 * @param {HTMLElement} element
 */
const simulateMouseHover = (element) => {
  const x = element.offsetLeft + element.offsetWidth / 2;
  const y = element.offsetTop + element.offsetHeight / 2;

  element.dispatchEvent(
    new MouseEvent('mousemove', {
      view: window,
      bubbles: true,
      cancelable: true,
      buttons: 1,
      screenX: x,
      screenY: y,
      clientX: x,
      clientY: y,
    })
  );
};

React app text entry

Entering text in a React app is very similar to how we would enter text in any website. We find the form input element we want to update, and we set its value directly.

However, React overrides the native value setter function on the inputs it controls. Because of that override, our direct updates may appear in the browser, but they bypass React's tracking of the element's value property, and React won't pick up on them.
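To see why reaching for the prototype's setter works, here is the trick in miniature. FakeInput is a contrived stand-in for an input element whose value setter has been shadowed; React's actual instrumentation is more involved, but the shadowed-setter idea is the same:

```javascript
// FakeInput is a contrived stand-in. The prototype carries the "native"
// value setter; the instance shadows it, swallowing direct assignments.
function FakeInput() {
  this._value = '';
  Object.defineProperty(this, 'value', {
    get: () => this._value,
    set: () => {}, // direct assignments are swallowed
  });
}
Object.defineProperty(FakeInput.prototype, 'value', {
  get() { return this._value; },
  set(v) { this._value = String(v); },
  configurable: true,
});

const input = new FakeInput();
input.value = 'typed text'; // swallowed by the instance override
console.log(input.value); // ''

// Reach past the override to the prototype's original setter:
const nativeSetter = Object.getOwnPropertyDescriptor(FakeInput.prototype, 'value').set;
nativeSetter.call(input, 'typed text');
console.log(input.value); // 'typed text'
```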

For example, if we enter text into a React form and then submit that form, React won’t actually have our data. It will appear in the browser, but React will have an empty form since it didn’t “hear” the input events that should have fired.

To get around this, we need to do two things:

  1. Get the original setter function and call it to make sure the element’s value is actually updated
  2. Dispatch an input event from that element so React “hears” the change happen and pulls the element’s value into its internal state

This is what that looks like:

/**
 * Enters text in an input field in a way that React will respond to.
 * Directly setting an input value works, but React will not copy the value to its internal model.
 * The value would disappear if the component is re-rendered, and even if it didn't it would never
 * be submitted in an API request.
 * @param {HTMLInputElement} input
 * @param {string} text
 */
const simulateTextEntry = (input, text) => {
  const nativeInputValueSetter = Object.getOwnPropertyDescriptor(
    window.HTMLInputElement.prototype,
    'value'
  ).set;
  nativeInputValueSetter.call(input, text);

  const event = new Event('input', { bubbles: true });
  input.dispatchEvent(event);
};


Automate and scrape a website with XPath Expressions

I’ve been scraping websites for a long time. One of my first “real” programming projects used Java and Selenium to automate a website and scrape data. I later used Node.js to pull data from many different education-related websites so teachers could see it all in one place.

In this post, I want to share some tips for using XPath expressions to find page elements and data. Scroll to the bottom for two helper functions that make it just as easy to use XPath expressions as CSS selectors!

Why Scrape?

In a perfect world, every site would be regularly maintained and would have an API you could use to efficiently get the data you need. In reality, we often need to use an older legacy site to access the data we need. There is no API, and the website we need to use is clunky and time consuming.

We might use web scraping to make sure people aren’t doing work that a computer can do automatically. For example, instead of having a person search for individual records manually, we can make a list of records we want to search and build an automation that finds those records. If you have multiple sources of data, you can scrape them concurrently with an automation.
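The concurrent approach can be sketched with Promise.all. Here fetchRecord is a hypothetical stand-in for a real scraping routine:

```javascript
// Hypothetical stand-in for a routine that looks up one record by scraping.
const fetchRecord = async (recordId) => {
  // a real implementation would navigate pages and read data here
  return { recordId, found: true };
};

const recordIds = [101, 102, 103];

// Look up every record concurrently instead of one at a time.
const lookupAll = async () => Promise.all(recordIds.map(fetchRecord));

lookupAll().then((records) => console.log(records.length)); // 3
```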

In short, automated web scraping is almost always more efficient than manual searching and data entry.

What is an XPath Expression?

XPath stands for “XML Path Language.” Since HTML is an XML-like language, we can use XPath to find specific HTML nodes. This is similar to using a CSS Selector to find an element on the page, but it can be much more powerful.

For example, this is how we would find the same element using a CSS selector and XPath expression:

| Description | CSS Selector | XPath Expression |
|---|---|---|
| H1 element | h1 | //h1 |
| Paragraph within a section | section p | //section//p |
| Anchor element with title "home" | a[title="home"] | //a[@title="home"] |
| Second direct-descendant list item in an ordered list | ol > li:nth-of-type(2) | //ol/li[2] |
| Paragraph with exact text "lorem ipsum" | impossible | //p[text()="lorem ipsum"] |
| Paragraph containing text "lorem ipsum" | impossible | //p[contains(text(), "lorem ipsum")] |
XPath expressions give us more power than just CSS selectors.

As you can see, XPath expressions have capabilities beyond CSS selectors. For example, XPath expressions can find an element based on its text content.

How to use XPath Expressions in JavaScript

Most developers quickly learn about the querySelector and querySelectorAll methods for querying the DOM with CSS selectors. For example:

// find a heading
const heading = document.querySelector('h1');

// find all paragraphs
const paragraphs = document.querySelectorAll('p');

Similarly, the document.evaluate function allows us to search the DOM using an XPath:

// find a heading
const heading = document.evaluate(
  '//h1',
  document,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
).singleNodeValue;

// find all paragraphs and log their text content
const paragraphs = document.evaluate(
  '//p',
  document,
  null,
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
  null
);

for (let i = 0; i < paragraphs.snapshotLength; i++) {
  const text = paragraphs.snapshotItem(i).textContent;
  console.log(text);
}
document.evaluate parameters

The document.evaluate function receives several parameters. This makes it seem more complicated than the querySelector and querySelectorAll functions. This is the price we pay for the power of XPath.

I’ll be showing you a wrapper function to make XPath much easier to use. Since you won’t have to worry about the details, I’m going to skip most of them. You can always check the documentation over at MDN if you want to know more.

Search within a parent element using XPath Context Nodes

One thing you do want to be aware of is how you can search within a specific element using an XPath expression. You can do this with the contextNode parameter of document.evaluate. It is the second parameter.

In the examples above, the context node was always the document itself. However, it can be any element you provide. Calling document.evaluate with a specific context node and a relative XPath expression (one starting with ./ instead of /) will only return matches within that node.

This is similar to how we can call querySelector on the document itself, but we can also call it on a specific element within the document:

// searches the whole document for paragraphs
const allParagraphs = document.querySelectorAll('p');

// searches only for paragraphs inside the section element
const section = document.querySelector('section');
const sectionParagraphs = section.querySelectorAll('p');

This is what it looks like to search a specific element for descendants using XPath:

// searches only for paragraphs inside the section element
const section = document.querySelector('section');

const sectionParagraphs = document.evaluate(
  './/p', // a relative expression, so the search starts at the context node
  section, // <-- notice the change!
  null,
  XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
  null
);

for (let i = 0; i < sectionParagraphs.snapshotLength; i++) {
  const text = sectionParagraphs.snapshotItem(i).textContent;
  console.log(text);
}

Mix and Match XPath Expressions and CSS Selectors

You can freely mix XPath expressions and CSS selectors in your code. Both return references to DOM elements, and both can use those element references as starting points when searching for descendant nodes.

Simple document.evaluate wrappers

Using document.evaluate is more verbose than using a CSS selector. However, we can simplify things by creating a wrapper function for document.evaluate that mimics querySelector and querySelectorAll.

Find one element by XPath easily

You can use this function to easily find a single document node using an XPath. This is similar to using the querySelector function:

/**
 * Shorthand for calling `document.evaluate` to get a single element via XPath.
 * @param {string} xpathExpression
 * @param {Node} [contextNode]
 * @returns {Node | null}
 */
const getNodeByXpath = (xpathExpression, contextNode = document) => document.evaluate(
  xpathExpression,
  contextNode,
  null,
  XPathResult.FIRST_ORDERED_NODE_TYPE,
  null
).singleNodeValue;

// From root of document
const heading = getNodeByXpath('//h1');

// From specific ancestor element ("context node")
const section = document.querySelector('section');
const sectionHeading = getNodeByXpath('.//h2', section);

Find multiple elements by XPath easily

You can use this function to easily find multiple document nodes using an XPath. This is similar to using the querySelectorAll function:

/**
 * Shorthand for calling `document.evaluate` to get multiple elements via XPath.
 * Returns an array of nodes, which may be empty.
 * @param {string} xpathExpression
 * @param {Node} [contextNode]
 * @returns {Node[]}
 */
const getNodesByXpath = (xpathExpression, contextNode = document) => {
  const result = document.evaluate(
    xpathExpression,
    contextNode,
    null,
    XPathResult.ORDERED_NODE_SNAPSHOT_TYPE,
    null
  );

  const nodes = [];
  for (let i = 0; i < result.snapshotLength; i++) {
    nodes.push(result.snapshotItem(i));
  }
  return nodes;
};

// From root of document
const paragraphs = getNodesByXpath('//p');

// From specific ancestor element ("context node")
const section = document.querySelector('section');
const sectionParagraphs = getNodesByXpath('.//p', section);


For help building your own XPath expressions, check out an XPath cheatsheet.

Use JavaScript to get original dimensions of an image file

Most of the time HTML images can take care of themselves. By default, an <img> element will load an image file and display it at its natural width on the page. If the file is 200px wide and 400px tall, that’s how big it will be when it loads.

If we need to adjust the size of the image, or how it is cropped or stretched or otherwise positioned, we can use CSS.

However, there are still times when we need to use JavaScript to manually work with an image, and often we need the image’s natural dimensions to do so. For example, we may need to paint the image on an HTML canvas.

Getting this information is easy, as the HTMLImageElement API includes width and height properties that expose the natural dimensions of the image file. Let’s see those properties in action in JavaScript:

const myImage = document.querySelector('img')
const { width, height } = myImage
console.log(`The image is ${width}px wide and ${height}px tall.`)

Determine any image file’s size

There are cases when we need to know the size of an image that hasn’t been loaded yet. When a web page first loads, its HTML is available before any image data. The HTML is parsed into the DOM and the page is rendered, so we often see a flash where the page appears without its images, which finish loading later.

This delay can cause problems if we need to use JavaScript to determine the size or behavior of elements around the image. Before the image has loaded, its naturalHeight and naturalWidth are both zero. If our code tries to lay out a web page based on the size and aspect ratio of the image before it loads, it will treat the image as if it had no size at all.

To get around this, we have to wait for the image data to be available in the browser. Once the image is loaded we can execute our JavaScript to resize and reposition elements based on the image.

Here’s an example:

// Returns the original dimensions of an image file via callback
function getImgSize(imgSrc, callback) {
  const newImg = new Image();

  newImg.onload = function () {
    const height = newImg.height;
    const width = newImg.width;
    callback({ width, height });
  };

  newImg.src = imgSrc; // this must be done AFTER setting onload
}

This function takes the URL of an image (imgSrc) and waits until the image has loaded. At that point, the callback function is called with the image’s natural width and height.

To accomplish this, we first create an HTMLImageElement. This is the same element that is created in the DOM by an HTML <img> tag.

HTMLImageElement’s width and height properties reflect the width and height attributes if those exist on the <img> element in the HTML, in which case they are not the natural dimensions of the image data being displayed. However, if an image does not have width and height attributes set, these properties return the natural width and height of the image data.

It is important that we wait for the image data to load in the browser. We do this by listening for the onload event. This event fires as soon as the image data is available in the browser. We have to be careful here – the image data might already be cached in the browser from a previous request, meaning the onload event will fire as soon as the image has an src. To ensure our onload callback is actually called, we have to add it before we give the image an src, which triggers the image data to load (from the browser cache or if necessary from a new request).

Other Applications

Another way we can take advantage of the onload event listener is to preload multiple images in sequence without overwhelming the connection to the server. For example:

// This is a Promise-based async version
const loadImage = (imageSrc) => new Promise(resolve => {
  const image = new Image();
  image.onload = () => {
    const height = image.height;
    const width = image.width;
    resolve({ image, width, height });
  };
  image.src = imageSrc;
});

const imageUrls = ['image1.png', 'image2.png', 'image3.png'];

const run = async () => {
  for (const imageUrl of imageUrls) {
    const { image, width, height } = await loadImage(imageUrl);
    // do something with `image`, `width`, and `height`
  }
};

run();

How to determine which JavaScript file is running

In programming, context is important. Part of that context is the currently executing script file.

For example, I might need to know which directory the current JavaScript file is running in so I can refer to other resources using a relative path. Relative paths are more convenient to work with than typing out absolute paths for everything. Also, if I were using absolute paths and I moved a file or directory in my code project, I would have to update all of the absolute paths inside. So there are valid reasons for wanting to know which script file we are currently in.

Node.js makes it extremely easy to determine the current file or directory with __filename and __dirname, respectively.

Fortunately, it’s not too difficult to find the same information in the browser using document.currentScript. This property returns the <script> element that is currently being executed.
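Once we have the script element, its src property gives us the file’s URL, and from that we can derive the directory for building relative paths. Here’s a sketch; the URL is made up so the example is self-contained, but in a real page scriptSrc would be document.currentScript.src:

```javascript
// In a real page, scriptSrc would be document.currentScript.src;
// a literal URL is used here so the sketch is self-contained.
const scriptSrc = 'https://example.com/assets/js/app.js';

// Strip the filename to get the script's directory
const scriptDir = scriptSrc.slice(0, scriptSrc.lastIndexOf('/') + 1);

// Resolve a relative path against that directory
const logoUrl = new URL('../img/logo.png', scriptDir).toString();

console.log(scriptDir); // 'https://example.com/assets/js/'
console.log(logoUrl);   // 'https://example.com/assets/img/logo.png'
```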



There are a few caveats to using document.currentScript. First, the property only references its containing <script> element if the code is executing synchronously. That means we can’t use it in callbacks and event handlers. Fortunately, this is easy to get around:

const currentScript = document.currentScript

function myCallback() {
  // document.currentScript would be null here,
  // but the reference we saved earlier still works
  console.log(currentScript.src)
}

setTimeout(myCallback, 1000)

By saving a reference to document.currentScript while our code is running synchronously, we can still use that reference later in our asynchronous code.

Another caveat is that document.currentScript doesn’t work in JavaScript modules. That’s okay. We can use import.meta.url there instead.

Internet Explorer

document.currentScript is supported by all modern browsers, but doesn’t have support in Internet Explorer. You can still use this snippet to achieve the same effect:

var currentScript;
if (document.currentScript) {
  currentScript = document.currentScript
} else {
  var scripts = document.getElementsByTagName('script')
  currentScript = scripts[scripts.length - 1]
}
console.log('Script located at: ' + currentScript.src)

We will query for all script elements on the page and then grab the last one by its index. With few exceptions, the currently executing script will be the last script element added to the DOM, as any later script elements haven’t been loaded yet.

One rare exception is a script dynamically appended somewhere other than the end of the document, such as in the head, after page load. If there were script elements in the body of the HTML document, the last of those would be returned instead of the currently executing script that was added later.

The good news is that if you are manually appending scripts to the DOM, you can ensure those dynamically added scripts are being added at the end of the body where this script will successfully identify them in Internet Explorer.

Load JavaScript files dynamically

Usually when we need to include a JavaScript file on an HTML page we just do this:

<script src="the-javascript-file.js"></script>

And with modern JavaScript maybe we throw an async or defer attribute on that script tag for a little extra performance. Better yet, we could set type="module" to use the JavaScript module system.
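For reference, those variants look like this (same placeholder filename as above):

```html
<!-- fetch without blocking parsing; run as soon as it arrives -->
<script async src="the-javascript-file.js"></script>

<!-- fetch without blocking parsing; run in order after parsing finishes -->
<script defer src="the-javascript-file.js"></script>

<!-- treat the file as a JavaScript module (modules defer by default) -->
<script type="module" src="the-javascript-file.js"></script>
```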

If we are using JavaScript modules, we can include other JavaScript module files directly by using an import statement:

import otherModule from '/other/module.js'

However, there are times when none of these options are available. For example, if we don’t have access to edit the original HTML markup being served, we are forced to load JavaScript dynamically.

Real world use cases for this include bookmarklets and web extensions.

Loading JavaScript dynamically

A <script> element can be created and appended to the DOM just like any other HTML element. For example:

const script = document.createElement('script')
script.src = '/my/script/file.js'
document.body.append(script)

Once a script element has been appended to the DOM, it will be executed. This means that inline scripts will have their contents interpreted and executed as JavaScript just as we would expect if they had been part of the HTML when it was first loaded. Similarly, external script files will be loaded and executed.

Here’s an inline example:

const inlineScript = document.createElement('script')
inlineScript.innerHTML = 'alert("Inline script loaded!")'
document.body.append(inlineScript)

As you can see, it’s easy to create and append new script elements, allowing us to include any number of external JavaScript files dynamically after a page has loaded.

Determining when a JavaScript file is loaded

The real challenge isn’t loading the file – it’s knowing when the file has finished loading. For example, maybe we have code that uses a library like jQuery or AngularJS or Vue (listed in order of ancientness, not preference). We need to make sure the library is loaded before we execute our own code, otherwise our code will break.

We could do something silly like call setInterval and continually check if the library has loaded by looking for its global window variable:

const jqueryScript = document.createElement('script')
jqueryScript.src = '' // the URL of the jQuery script file goes here
document.body.append(jqueryScript)

const jqueryCheckInterval = setInterval(() => {
  if (typeof window.jQuery !== 'undefined') {
    clearInterval(jqueryCheckInterval)
    // do something with jQuery here
  }
}, 10)

However, this code is ugly and wastes resources. Instead, we should listen directly for the script element to fire its onload event:

const jqueryScript = document.createElement('script')
jqueryScript.src = '' // the URL of the jQuery script file goes here
jqueryScript.onload = () => {/* do something with jQuery */}
document.body.append(jqueryScript)

We’ve already cut the size of our code in half, making it much easier to read and work with. It’s also slightly more performant.

The code would be even easier to read if we used Promises, which would allow us to chain multiple scripts together to load one after the other. Here’s a function we can use:

/**
 * Loads a JavaScript file and returns a Promise for when it is loaded
 */
const loadScript = src => {
  return new Promise((resolve, reject) => {
    const script = document.createElement('script')
    script.type = 'text/javascript'
    script.onload = resolve
    script.onerror = reject
    script.src = src
    document.head.append(script)
  })
}

Notice we have also introduced error handling by listening for the script element’s onerror event.

Here’s what the script looks like in action:

loadScript('') // the URL of the jQuery script file goes here
  .then(() => loadScript('')) // the URL of the jQuery UI script file goes here
  .then(() => {
    // now safe to use jQuery and jQuery UI, which depends on jQuery
  })
  .catch(() => console.error('Something went wrong.'))

Keeping Things Fresh

The above script works great for libraries and modules that never change, such as those loaded from a CDN. Once the script is loaded, the browser will automatically cache it. The next time the script is needed, the browser will reuse the copy it saved earlier. This saves bandwidth and makes the page load faster.

This built-in behavior is actually a problem for scripts that change. For example, you might want to create a bookmarklet with just enough code to load an external script, allowing that script to do all of the heavy lifting. That script might change over time as you add new features. If you use the above loadScript function, those new features might not show up because the browser has already cached your script, and it now reuses that cached version instead of checking your server.

To ensure your script is actually loaded from your server each time, you can add a meaningless query value to the end of the script URL. As long as this value is different each time the script is loaded, it will cause the browser to treat the URL as a new resource and load it directly from the server each time.

Here’s what that can look like in code:

/**
 * Loads a JavaScript file and returns a Promise for when it is loaded
 */
const loadScriptNoCache = src => {
  return new Promise((resolve, reject) => {
    const url = new URL(src)
    url.searchParams.set('random', Math.random())
    const script = document.createElement('script')
    script.type = 'text/javascript'
    script.onload = resolve
    script.onerror = reject
    script.src = url.toString()
    document.head.append(script)
  })
}

Dueling with dinosaurs

If you don’t have access to the original HTML source of the page you’re working with, there’s a chance you’re facing other limitations as well. For example, you could be forced to work with Internet Explorer.

IE may be old and behind the times, but thankfully we can accommodate it with just a few modifications. First, we need to drop the Promises API and go back to using callbacks. Second, we need to account for IE’s unique way of handling script load events. Namely, older IE doesn’t fire an onload event on script elements, but it does give them an onreadystatechange event, just like XMLHttpRequest.

Here’s the callback-based version that works with Internet Explorer as well as other browsers:

/**
 * Plays well with historic artifacts
 */
function loadScript(src, callback) {
  var script = document.createElement('script')
  script.type = 'text/javascript'

  // IE
  if (script.readyState) {
    script.onreadystatechange = function () {
      if (script.readyState === 'loaded' || script.readyState === 'complete') {
        script.onreadystatechange = null
        callback()
      }
    }
  }
  // Others
  else {
    script.onload = callback
  }

  script.src = src
  document.getElementsByTagName('head')[0].appendChild(script)
}

Minimal Golang Heroku App

Heroku’s documentation for its Golang buildpack isn’t up to date, and it’s unclear which files we really need to get started. I’m going to lay out the bare essentials for creating a minimal Golang Heroku app. I will explain the purpose of each and every file so we can assert that we do in fact have a bare-bones starter project for Go on Heroku.

History of the Heroku Go buildpack

Heroku supports many languages with its list of official buildpacks. One of the platform’s more recent additions was the official Go buildpack.

Before there was an official buildpack for Golang, we could still create Go apps on Heroku using a custom buildpack created by a member of the community. Unfortunately, this was a more complicated option and had no official support from Heroku.

Since we can now use the official Heroku Go buildpack, we should take that approach. However, Heroku’s Go buildpack documentation and corresponding demo app are still set up to use the old custom buildpack. For example, they still have a Dockerfile and Makefile. With the new buildpack, we don’t need to manually interact with Docker or Make!

Heroku’s Go demo app also has a /vendor folder for dependencies. The Go language has evolved and there is now an official tool for managing dependencies: Go Modules. The official Heroku Go buildpack supports Go Modules, so we no longer need a third-party tool for vendoring.

Basically, Heroku’s Go buildpack is far ahead of its documentation. This leads to a lot of dead code that we would be blindly copying if we started from Heroku’s old example project. Let’s cut that out and see how little it takes to get up and running on Heroku with Golang.

Setting up

You will of course need to install Go on your machine.

To follow along, you will also need to install the Heroku Toolbelt. This will allow you to use the Heroku CLI to set up and deploy our minimal Golang app.

Since the Heroku CLI uses Git as part of its deployment process, you will also need to have Git installed. Windows users can install Git Bash.

Create a Golang web server

Start by creating your project directory:

mkdir minimal-golang-heroku-app

Within that directory, create a main.go file:

package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello World!")
	})

	http.ListenAndServe(":80", nil)
}

This is the minimum viable Go web server. All we are doing is listening for requests to the root path and responding with “Hello World!” If you run the code, you can open your browser to localhost to see the “Hello World!” response come back:

go run main.go

Set up a Git repository

We will deploy our project to Heroku by using Git, so let’s initialize it as a Git repository and commit our changes.

Run these commands inside the project folder:

git init
git add .
git commit -m "Initial commit"

Create a Heroku app

Now that we have a web server, let’s turn it into a Heroku app. We will set things up using the Heroku CLI while in the project directory:

heroku create

That command does a few things. First, it creates an empty application on Heroku, along with an empty Git repository to go with the app. Then, if your local project is a Git repository, it sets the new Heroku repository as a remote named heroku.

You will see output like this:

$ heroku create
Creating app... done, ⬢ evening-ocean-54721

If you check the list of apps on your Heroku dashboard after running that command, you will see the new app has been created.

Push to Heroku

The only step left is to actually push the code from our computer up to our Heroku app so we can see it on the web. However, there is a catch.

Try pushing your app up to Heroku. Remember you need to set the upstream branch in Git since this is your first push:

git push -u heroku master

Heroku will reject your code at this point with these error messages:

No default language could be detected for this app.
HINT: This occurs when Heroku cannot detect the buildpack to use for this application automatically.

In order to build and run your code, Heroku needs to know which buildpack to use. We can manually set the buildpack using the CLI, the GUI, or an app.json file. If we don’t manually set a buildpack, Heroku will look for clues to determine which language to use, but it will give up if it can’t figure it out.

I prefer for Heroku to intuit the correct buildpack, as it saves me a step (and potential troubleshooting) when deploying an app for the first time.

Initialize a Go module

Fortunately, it’s easy to signal to Heroku that this is a Golang app. We just need to initialize it as its own Go module:

go mod init minimal-golang-heroku-app

This will create a go.mod file, which declares that the directory is a Go Module. Technically, you can name your module whatever you want. However, it’s convention to name a module to match its Git repository.

Before we install any dependencies, our go.mod file is very minimal. Aside from the module directive on the first line (which contains whatever name you chose), it only records the Go version:

module your-module-name

go 1.12

As you may have guessed, Heroku will look for a go.mod file in new projects to see if it should use the Go buildpack. Let’s commit our go.mod file and try pushing to Heroku again (we still need to set our upstream branch since our last push was rejected):

git add go.mod
git commit -m "Added go.mod"
git push -u heroku master

This time, Heroku will accept our commit. It will successfully build our minimal Golang application and run it as a dyno.

Unfortunately, we have one more problem to solve. If you try to view your application online (you can find the URL in the git push output or on your Heroku dashboard), you’ll see Heroku’s “Application Error” screen instead of our “Hello World!” message.

Using Heroku environment variables

This is an easy fix. Since Heroku apps share a machine with other apps, they are assigned an arbitrary port to listen on. In other words, our app isn’t going to be listening on port 80, which is what it tries to do by default. It also won’t be listening to the same port each time it is started (and Heroku does love to restart its dynos).

To get the PORT environment variable from Heroku, update main.go:

package main

import (
	"fmt"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprint(w, "Hello World!")
	})

	http.ListenAndServe(":"+os.Getenv("PORT"), nil)
}
We are importing the os package at the top, then using os.Getenv("PORT") to read the PORT environment variable. Notice we add a colon (:) in front of the port to get the correct syntax.

Let’s test our app by passing in a PORT value:

PORT=8080 go run main.go

If you visit localhost:8080, you will see the app running.

Now make one final commit and push:

git add main.go
git commit -m "Use PORT"
git push

And with that, Heroku will accept our changes, successfully build our app, and finally deploy it to the web for all to see.

Final notes

You can see the complete example on my GitHub.

One thing we didn’t do is install dependencies. Heroku doesn’t require any extra steps here – if you know how to work with Go Modules, you already know how to manage dependencies on Heroku.

Use JavaScript to condense HTML markup and remove extra space

This technique is a little bizarre and probably doesn’t have many direct applications. I’m going to share it because it lets me talk about regular expressions, and it’s food for thought about other ways to use them.

The problem

We have a DOM element that we want to serialize (basically, we want to store it as a string). The problem is, there is a ton of extra white space in the element’s HTML markup. If we stored that white space in a JavaScript variable we would be wasting resources. If we have to store many elements like this, the problem is even bigger!

We need a way to remove space from a string, but not just any space. We only want to remove white space characters that occur between HTML elements.

<p>We want to keep all space between these words.</p>

<p>We want to remove the blank line above this paragraph.</p>

<span>Also, </span>          <span>the space between these two spans</span>

A simple solution

Here’s the answer:

function condenseHTML(elem) {
    const html = elem.outerHTML
    const condensedHtml = html.replace(/>\s+</g, '><')
    return condensedHtml
}
Let’s walk through this:

  • Our function accepts a single argument: an HTMLElement.
  • We then get the HTML that represents this element and its children using the element’s outerHTML property.
  • Next, we use a regular expression to execute a search and replace operation.

Let’s talk about that regular expression:

  • The regular expression looks for one or more white space characters (\s matches white space characters, + matches one or more).
  • Specifically, we are looking for white space between > and < characters. In other words, white space that appears after the end of one HTML element and before the beginning of the next. Notice that we are matching the > and < as well.
  • Also notice that the regular expression has the g flag at the end, which stands for “global”. This means it can find multiple matches in the same string.
  • We will then replace all the matches by using the string’s replace method.

Since our regular expression matched the > and < characters, they will also be replaced. That’s why we are replacing each individual match with ><, as otherwise the angle brackets would be lost. Now the HTML elements remain valid but there will no longer be space between them.
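Here’s the same replacement run on a plain string, just to see it in action:

```javascript
// The whitespace between the closing and opening tags is removed,
// and the angle brackets are restored by the '><' replacement.
const html = '<p>first</p>\n\n  <p>second</p>'
console.log(html.replace(/>\s+</g, '><'))
// → <p>first</p><p>second</p>
```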


This approach isn’t perfect. Technically, we are removing space between elements that really should be there. See this example:

<span>We still need</span>         <span>space here.</span>

The space between these two spans will all be removed, which means the words “need” and “space” will be smashed together when viewed in the browser. To be extra safe, our solution could replace multiple spaces with a single space instead of replacing them all with no spaces.

Remember that in HTML, multiple spaces usually condense down to a single visual space (unless the element’s style has white-space set to pre, pre-wrap, or break-spaces).

Extended learning

If you’d like to get more practice using regular expressions, there are plenty of interactive tutorials and testing tools worth checking out.

If you’re interested in further optimizing HTML serialization, try using a DOMParser to parse the HTML source into a live Document. You can then use the context of each Element and Text node to determine which spaces are safe to remove.

Image transparency with CSS

Pro tip: you can make the white pixels in your image files look transparent using just CSS!

To do this, use the mix-blend-mode property:

img.clear-white {
  mix-blend-mode: multiply;
}
That snippet will cause all images with the “clear-white” class to display as though their white pixels were transparent.

This technique is especially useful if you are working in an environment that doesn’t have great tools for controlling transparency in the actual image files.

The mix-blend-mode property is currently supported by Chrome, Firefox, and Safari. Since Microsoft has adopted the Chromium project for Edge, maybe we will see support in Edge soon.

For more great info on CSS mix-blend-mode and how to use it, check out this article on CSS Tricks.

Read console input with Node.js

When people first learn a programming language, they often start by creating small programs that run in the terminal (a.k.a. “console” a.k.a. “command prompt”). The program will give some text output, then the user will type some text input, then the program will read it and give some more output.

One such exercise could be: create a program that asks the user for their name, allows them to type their name, then greets them by name.

There are many programming tutorials that teach Python and Ruby and other scripting languages, and many of them start here, with simple console programs. On the other hand, Node.js tutorials usually start by creating a simple web server. That’s super cool, but it means most Node.js developers aren’t very familiar with how the terminal works in their own programming language.

Interacting with the terminal

First, we need a way to write to the terminal and also a way to read data that was entered in the terminal. Node.js exposes both through the global process object, as process.stdout and process.stdin.

stdin is the standard input stream, and stdout is the standard output stream. This pattern exists across programming languages. To simplify, these streams allow data to pass in and out of the program through the terminal.

Here’s a one-liner example of how to write to stdout:

process.stdout.write('Hello world!')

And here’s a simple example of how to read from stdin:

process.stdin.once('data', data => {
  console.log('Data entered: ' + data)
  process.exit()
})
We added a one-time event listener for a 'data' event from the stream, which happens when someone types input and presses Enter.

Since stdin is a Readable stream, it begins in paused mode. Adding a 'data' listener switches the stream to flowing mode. When stdin is in flowing mode, it listens for data from the terminal.

Once the 'data' event triggers, we can work with the input from the console. In this case, we will just log some output to the console. Notice how the global console instance is already configured to write to stdout!

Also notice how we had to call process.exit() to exit the program. Since we left the process.stdin stream open, the program was still listening for user input. Calling process.exit() is one way to terminate the program. A cleaner approach would be to pause the stream by calling process.stdin.pause() in the same place, which lets the program exit on its own.

Keeping things organized

Since Node.js is asynchronous, things are going to get a little weird. When you ask the user for input, Node.js doesn’t wait for them to type something. It just keeps on working. So it’s important that we keep things as organized as possible so we don’t lose track of what’s happening.

Fortunately, the built-in readline module simplifies things for us:

const readline = require('readline')

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
})

rl.question('What is your name?', nameAnswer => {
  console.log(`Nice to meet you, ${nameAnswer}.`)
  rl.close()
})

Once we create a readline interface, we don’t have to worry about using process.stdin and process.stdout directly.

Notice that readline uses callbacks. If we want to ask several questions in sequence, we have to juggle those callbacks somehow:

rl.question('What is your favorite color?', colorAnswer => {
  console.log(`I like ${colorAnswer} too.`)

  rl.question(`What shade of ${colorAnswer} is best?`, shadeAnswer => {
    console.log(`Wow, ${shadeAnswer} is also my favorite!`)
  })
})
Of course, JavaScript has a solution for this. We can use Promises to avoid nesting callbacks ad nauseam:

const question = prompt => {
  return new Promise((resolve, reject) => {
    rl.question(prompt + '\n', resolve)
  })
}

Conveniently, we have abstracted away all interaction with the standard input and output streams. That leaves you to focus on the higher-level concern of what text should display and what to do with the text typed by the user. We have also abstracted away the pyramid of callbacks, so we can more easily see line-by-line how the terminal interaction will go.

Putting it all together

Here’s a final complete example of our setup, which is inside an async function so we can use await to inline our Promises:

const readline = require('readline')

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout
})

const question = prompt => {
  return new Promise((resolve, reject) => {
    rl.question(prompt, resolve)
  })
}

;(async () => {
  const nameAnswer = await question('What is your name?')
  console.log(`Nice to meet you, ${nameAnswer}.`)
  const whereAnswer = await question('Where are you from?')
  console.log(`I hear it's nice in ${whereAnswer}.`)
  rl.close()
})()

Read a file “upload” with JavaScript

In my last post I showed how to create file downloads with JavaScript.

That’s only part of the picture. If we want our users to be able to continue working with their data after they’ve downloaded it, we also need to support file uploads.

Why not use another storage option?

The specific use case we are talking about is a project where we don’t have access to any sort of backend server or database, but we still need a way for our users to save their work. One option is to just store the data in the browser, but all of our in-browser options have significant limitations:

  • If we used session storage, the data would disappear as soon as the browser was closed. In most cases, we need the data to persist between browser sessions.
  • We could use HTTP cookies to store data, but cookies have a maximum size of 4096 bytes per domain. When it comes to storing data, that’s probably not enough. Also, since cookies are transferred to the server with every request, we would be bogging down our interactions with the server. Wait! There is no server! In that case, our users are likely viewing our HTML file using the file:// protocol, which means cookies can’t be set in the first place.
  • We could step up our game with local storage or IndexedDB, which both have reasonable size limits and can store data across browser sessions. The major problem here is that they are browser-specific. Our users can’t use their data in other browsers on other computers. They also can’t export their data to be used in other programs.

If we allow users to just save data to the filesystem, none of the above drawbacks apply. Files don’t disappear when the browser is closed or the user clears their cookies. There is practically no size limit. Most importantly, files are portable and can be used in other browsers, on other computers, and even in other applications.

Reading an uploaded file

In order to read an uploaded file, we will first need an HTML form to which we can attach our “upload.” Of course, we aren’t really uploading anything since we don’t have a server. We just need a file <input> element to let our JavaScript read a file from the local filesystem.

<input type="file" id="file-to-read">

Well that was about as easy as it gets. That one-liner will allow us to select a file on our computer to “upload.” Now let’s add a button that we can click to initiate reading the selected file.

<button onclick="readFileAsText()">Load Selected File</button>

Piece of cake. This is just a regular button with an event listener. When we click it, the readFileAsText function is called.

Let’s finish up by defining that function:

  const readFileAsText = function() {
    const fileToRead = document.getElementById('file-to-read').files[0]
    const fileReader = new FileReader()
    fileReader.addEventListener('load', function(fileLoadedEvent) {
      const textFromFileLoaded = fileLoadedEvent.target.result
      console.log(textFromFileLoaded)
    })
    fileReader.readAsText(fileToRead, 'UTF-8')
  }

Step by step, in English

  • The first thing we need to do is get a reference to the file that the user selected. We do that by finding the file input, in this case by its ID. Then we access its files property. Since a file input can allow multiple files to be selected, the files property is an array-like FileList. We only care about the first item, so we will grab the item at index zero.
  • We will then create a FileReader, which allows us to asynchronously read data from a File or Blob.
  • It is important to realize what “asynchronous” means here. It means the FileReader will mind its own business while it is reading and won’t block the rest of our code from running. We need to add an event listener for its load event so we can work with the result when it is finished reading.
  • In our event listener we need to reference the FileReader because it now has a populated result property. We can always do this through the event’s target property, as in fileLoadedEvent.target.result. (Since this particular example uses a traditional function and not an arrow function, we could also use this to refer to the FileReader.)
  • We are now free to do whatever we want with the contents of the file, which are a string. In this example, the file contents are logged to the console.
  • Remember that we haven’t actually told the FileReader to read the file yet – so far we have only told it what to do when it finishes reading a file. It is important to attach event listeners first. However, in order to get the FileReader to read a file in the first place, we need to call its readAsText method on the file from our <input>. This method causes the result to be a string. The FileReader class has other methods that cause result to be different types of data.
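As a variation, the FileReader dance can be wrapped in a Promise, much like we did with readline’s question earlier. This is a convenience helper of my own (readFileText is not part of the example above):

```javascript
// Wrap FileReader's event-based API in a Promise so callers can use
// async/await. Resolves with the file's contents as a string.
function readFileText(file) {
  return new Promise((resolve, reject) => {
    const fileReader = new FileReader()
    fileReader.addEventListener('load', e => resolve(e.target.result))
    fileReader.addEventListener('error', () => reject(fileReader.error))
    fileReader.readAsText(file, 'UTF-8')
  })
}
```

A caller could then write `const text = await readFileText(fileToRead)` instead of juggling event listeners.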

Managing the data

If you are going to allow your users to save and load data from your app, I recommend serializing that data in JSON form. If you do, the data will have a clear structure and will be easy to deserialize and load back into your application. It will also be easier for other applications and custom scripts to consume the data.
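For instance, a round trip through JSON looks like this (the appState object here is just a made-up example):

```javascript
// Serialize app state to a JSON string (pretty-printed so the saved file
// stays human-readable), then parse it back on "upload".
const appState = { title: 'My project', items: ['alpha', 'beta'] }

const serialized = JSON.stringify(appState, null, 2)
const restored = JSON.parse(serialized)

console.log(restored.title)
// → My project
```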