Go Tips – Logging Into a Website

Web scraping is a very common task these days. Be it checking stock prices or seeing whether there is a new book to read, there are loads of uses for it.

I’ve been tinkering with Golang (Go) for a few weeks, and I have found the standard library is packed with useful tools and bits to help with retrieving data from online sources.

Logging into a website is sometimes required to gain access to certain material or other sections of the website, so I thought I would write up a quick example of how to do this.

It is rather simple, so advanced users will probably already know this; hopefully it is useful for someone though.

Disclaimer: Some websites don’t support this; they usually have some alternative way to log in, generally an API interface.

Let’s break this down into steps:

  1. Create a cookie jar to store your cookies and current session information.
  2. Create an HTTP client that uses the cookie jar from Step 1.
  3. Send your username / password / other information.
  4. Done! You can now request information from pages that require you to be logged in.

Alright, sounds pretty easy, but let’s see some code!

First of all, the imports.
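
The original snippet isn’t shown here, but everything in this walkthrough comes from the standard library. A likely minimal import set for the steps below:

```go
import (
	"fmt"                  // printing results and building error messages
	"net/http"             // the HTTP client
	"net/http/cookiejar"   // the in-memory cookie jar
	"net/url"              // url.Values for the login form
)
```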


1 – Create the cookie jar
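
A minimal sketch of this step — `cookiejar.New(nil)` uses the default options and keeps session cookies in memory for the life of the program:

```go
package main

import (
	"fmt"
	"net/http/cookiejar"
)

// newJar builds an in-memory cookie jar; nil options are fine for
// most scraping use.
func newJar() (*cookiejar.Jar, error) {
	return cookiejar.New(nil)
}

func main() {
	jar, err := newJar()
	if err != nil {
		panic(err)
	}
	fmt.Println(jar != nil) // true
}
```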

2 – Create the HTTP client

3 – Send your login information

The “username” and “password” fields should match the form names of the HTML elements. Usually they are username and password, but some websites choose different ones.

You can find those by inspecting the source, finding the correct element, and checking what its name attribute is, e.g. name="username".

4 – Use the client

Now that you are logged in, you can request information from the website with your client instance.

It’s up to you how to use the client instance; maybe you want to use client.Get() or client.Do().

Here are some useful links to help you along.



Working Example

Useful links