Simply Explained : Multithreading

John and Mary both walk into work. We’ll define a “thread” to simply be a sequence of consecutive actions. In this case, John is one “thread” and Mary is another, because they can each execute actions in parallel. If John is working on a spreadsheet, Mary does not have to wait for John to finish before she can do her own tasks. They are both independent entities.

Now, John loves donuts. Fortunately for John, his company always stocks donuts in the company fridge. For John, his thought process is quite simple, and looks like this :

if John sees at least one donut in the fridge
    then John washes his hands
    then John grabs a donut and eats it

And this is fine and dandy. When John is alone, this course of logic always works. He sees a donut in the fridge, he washes his hands, and he eats it. It always works without fail.

However, one day, Mary walks into the room. And Mary also likes donuts. When John looks into the fridge and sees that there is exactly one donut left, he hurriedly goes to wash his hands. However, when he returns, he finds that the donut is gone! But that shouldn’t have been possible. John had checked that at least one donut existed before he decided to wash his hands to eat it!

So what happened? It turns out that Mary had taken the donut in the time that John had spent washing his hands. This is the problem with multithreading, which is the act of programming with more than one thread.

 


The Problem with Multithreading


While it may be true that John’s logic always succeeds when he is alone (single-threaded), the same cannot be said when Mary is with him (multithreaded). This is because when objects are shared between multiple threads, multiple different threads are able to modify the object at the same time. In this case, the fridge is the object, and both John and Mary are able to modify the contents of the fridge at any time.

This is a huge problem, because functions that were unit tested or proven to work on a single thread no longer work when multithreaded. As shown above, a defensive if statement is ineffective on its own, because another thread can modify the object the moment your check completes. The check and the action that follows it are not one atomic step when everything runs in parallel.

So how can John prevent Mary from modifying the contents of the fridge until he is done using it? The answer is fairly simple: when John accesses the fridge, he needs to make sure that he is the only one who has control of it until he is done with whatever he is doing. There are multiple ways to do this, but the most common is to use something called a “lock”.
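To make this concrete, here is a minimal sketch in Python (the fridge-as-a-list and the thread names are illustrations, not code from any real system) of holding a lock across both the check and the grab, so the two steps happen as one uninterruptible unit:

import threading

fridge = ["donut"]              # the shared object
fridge_lock = threading.Lock()  # its one and only lock

def grab_donut(name):
    # The check and the grab happen while holding the lock,
    # so no one can empty the fridge between the two steps.
    with fridge_lock:
        if fridge:                        # "sees at least one donut"
            fridge.pop()                  # "grabs a donut and eats it"
            print(name, "got a donut")
        else:
            print(name, "found an empty fridge")

john = threading.Thread(target=grab_donut, args=("John",))
mary = threading.Thread(target=grab_donut, args=("Mary",))
john.start()
mary.start()
john.join()
mary.join()

Exactly one of the two threads gets the donut, and the other sees an empty fridge; the impossible “checked, then vanished” state can no longer happen.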

This new multithreaded system with locks has three simple rules :

1. Whoever controls the lock will have sole control over the object for as long as they possess the lock.
2. An object should only have one lock.
3. Any thread that finds an object locked can either fail immediately (for example, by throwing an exception) or wait for it to be unlocked, in which case the lock is passed on in some priority order (usually first come, first served, like a queue).

Let’s examine why each of these three rules holds.

 


Rule Number One


“Whoever controls the lock will have sole control over the object for as long as they possess the lock.”

The first rule is true because that is how a lock is defined. But for a more intuitive approach, you can think of the lock as “binding” itself to its owner. As long as the owner of the lock does not relinquish control, the object will forever be locked. This also means that if a thread controlling a locked object gets stuck in an infinite loop, that locked object will be locked forever.

Of course, while you could manually unlock the object, there is no way to determine if an object will be locked forever, because you can’t (in general) determine whether or not a thread is stuck in an infinite loop (see Halting problem).

 


Rule Number Two


“An object should only have one lock.”

Suppose an object could have two separate locks that give complete access over the object. That means that two different threads can access the object at the same time. Unfortunately, this is essentially no different than not having a lock at all, because now the object can be freely edited at any time by anyone. We are back to the exact same problem that we initially started with!

This means that an object must have one and only one lock, and only one owner for that lock.

 


Rule Number Three


“Any thread that finds an object locked can either fail immediately (for example, by throwing an exception) or wait for it to be unlocked, in which case the lock is passed on in some priority order (usually first come, first served, like a queue).”

In general, it is bad practice to have your code throw an exception when it attempts to access a locked object. The more conventional approach is to create a queue for that locked object, like a wait list. This is so that if three threads wanted to access an object, they would access the object in the order that they called the lock.

For example, the execution order might look like this :


1. Thread 1 calls for the object. It is not locked, so Thread 1 gains control of the lock.
2. Thread 2 calls for the object. It is locked. Thread 2 is put on the queue (wait list)
3. Thread 1 performs some operation using the object.
4. Thread 1 relinquishes control of the lock.
5. Thread 3 calls for the object. The object is unlocked at this exact moment, but because there is a queue for it, the lock goes to the first thread in the queue, which is Thread 2. Thread 3 is now put on the queue.
6. Thread 2 performs some operation using the object.
7. Thread 2 relinquishes control of the lock.
8. Thread 3 automatically gains control of the lock, as Thread 3 is next on the queue.
9. Thread 3 performs some operation using the object.
10. Thread 3 relinquishes control of the lock.


As you can see, when multiple threads want access to a locked object, they gain access in first-come-first-served order. However, this does not necessarily mean that Thread 2 just has to twiddle its thumbs and do nothing. Thread 2 does not have to block while it waits; it can do other computations until its turn for the lock arrives.

This is important for when the desired operation could take a long time. For example, imagine if Thread 1 wanted access to an object controlling the webcam for your computer. Thread 1 could be using this object for a long time, and if Thread 2 and Thread 3 both sat idly doing nothing, we would experience a severe loss in performance.
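In Python, this try-the-lock-and-keep-working pattern can be sketched with a non-blocking acquire (the webcam scenario here is just the example from above, not a real webcam API):

import threading

webcam_lock = threading.Lock()

def do_other_work():
    print("doing unrelated computation while waiting")

def use_webcam_eventually():
    # Try the lock without blocking; if another thread
    # holds it, do useful work and try again later.
    while not webcam_lock.acquire(blocking=False):
        do_other_work()
    try:
        print("webcam lock acquired, capturing frame")
    finally:
        webcam_lock.release()

use_webcam_eventually()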

 


Conclusion


In this article, you’ve learned that multithreading is difficult because if two threads access an object at the same time, there’s no guarantee that the object will give you the desired behavior. To fix this problem, you simply have to use a lock, which will prevent other threads from accessing the object. When a thread is done performing operations with a locked object, it relinquishes the lock, and the next thread gets control of the lock.

In the next Simply Explained post, I’ll be talking about Futures, which is a programming construct used to obtain a value that does not yet exist (for example, because the object you wanted to use to get that value was locked). It allows you to write non-blocking code, because after you create a Future, your thread can do other tasks until the Future makes a callback, telling you that the value now exists.

Happy coding, and enjoy your new-found multithreading powers.


Singletons are Glorified Globals

You’re working on a snippet of code, and out of the blue, you happen to need a class that should only ever have one instance, and that needs to be referenced by other classes.

Sounds like a job for a singleton! Or is it?


You want to use a singleton to store state


Realistically speaking, if you’re using a singleton solely to store state, you are doing it wrong. What you’ve made isn’t a singleton — what you’ve made is a bunch of glorified globals. Using singletons to store state seems appealing at first, because you want to avoid using a global and every programmer knows that globals are evil. But using a singleton in this case is just abstracting the globals one layer back so that you can feel happy about your code not having globals.

Furthermore, having globals will make your code unpredictable and bug-prone, and this only gets worse as the number of dependencies in your singleton increases. If you are using a singleton because you need its global properties, it usually means you are not taking advantage of dependency injection (DI) / inversion of control (IoC). You should be passing the state around with IoC, not creating a giant global and passing around its fields.
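To make the contrast concrete, here is a minimal sketch in Python (the class names are invented for illustration): the first version reaches into a singleton for its state, while the second has the state handed to it.

# Singleton style: hidden global state
class Config:
    _instance = None

    def __init__(self):
        self.company = "Acme"

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

class SingletonReport:
    def render(self):
        # Reaches out to hidden global state; a test can't
        # swap the config without patching the singleton.
        return "Report for " + Config.instance().company

# Dependency injection style: the state is handed in
class InjectedReport:
    def __init__(self, config):
        self.config = config  # explicit, swappable dependency

    def render(self):
        return "Report for " + self.config.company

class FakeConfig:
    company = "TestCo"

print(SingletonReport().render())             # Report for Acme
print(InjectedReport(FakeConfig()).render())  # Report for TestCo

The injected version can be tested with any stand-in config, with no global state to patch.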


Singletons make your code painfully difficult to test and debug


Singletons can’t be easily tested when they are integrated with other classes. For every class that uses a singleton, you will have to manually mock the singleton and have it return the desired test values. On top of this, singletons are notoriously difficult to debug when multi-threading is involved. Because your singleton is a major dependency for several classes, not only is your code tightly coupled, but you will also suffer from hard-to-find multi-threading bugs due to the uncertain nature of globals.


You want to use a singleton to avoid repeating an expensive IO action


Occasionally, you may be tempted to use a singleton to perform an expensive IO action exactly once, and then store the results. An incredibly common example of this is using a singleton for a database connection. But doing so in addition to having multi-threading will cause massive headaches unless your connection is guaranteed to be thread-safe.

In general, database concurrency will not be easy to implement if you are using a singleton for the connection. What you really want is a database connection pool. A pool keeps a set of open connections ready for reuse, so you avoid repeatedly opening and closing connections, which is expensive. As an added bonus, if you use a connection pool, you simply won’t need a singleton.
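As a rough illustration of the idea (the connect function here is a stand-in, not any real database driver), a minimal pool can be built on a thread-safe queue:

import queue

def connect():
    # Stand-in for an expensive real connection
    return object()

class ConnectionPool:
    def __init__(self, size):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # pay the connection cost once, up front

    def acquire(self):
        return self._pool.get()        # blocks if every connection is in use

    def release(self, conn):
        self._pool.put(conn)           # hand the connection back for reuse

pool = ConnectionPool(size=4)
conn = pool.acquire()
# ... run queries with conn ...
pool.release(conn)

In practice you would use the pooling built into your database library, but the shape is the same: many threads, a fixed set of reusable connections, and no singleton in sight.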


You want to pass data around, but you don’t need to modify the data


You just need a data transfer object; there is no need for a singleton here (a minimal sketch follows below). Worse, giving your singleton methods that can modify the data makes it a god object. It knows how to do everything, knows all of the implementation details, and is likely coupled to basically everything, violating almost every software development principle.

Worse yet, using a singleton to provide context is a fatal mistake. If a singleton provides context to all the other classes, then that means every class that interacts with the singleton theoretically has access to all the states/contexts in your program.
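For the pass-data-around case, a plain data transfer object is all you need. A minimal sketch in Python (the field names are invented for illustration):

from dataclasses import dataclass

@dataclass(frozen=True)  # immutable: it carries data and modifies nothing
class UserDTO:
    user_id: int
    name: str
    email: str

def send_welcome_email(user):
    print("Emailing", user.email)

send_welcome_email(UserDTO(1, "Mary", "mary@example.com"))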


You are using a singleton to create a logger


Actually, this is one of the few acceptable uses for a singleton. Why? Because a logger does not pass data around to other classes, provides no context, and is generally only minimally coupled to the classes that require it. All the logger needs to know is that, given some log request or string, it should output the log to a file or to the console.

As you can see, a logger will provide nothing to classes that require it — there is nothing to grab from the logger. Therefore, it’s impossible to use the logger as a glorified global container. And best of all, loggers are incredibly easy to test due to how simple they are. These properties make loggers an excellent choice for a singleton.
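Python’s standard library reflects this: logging.getLogger returns the same logger instance for the same name, which is exactly this safe, nothing-to-grab kind of singleton:

import logging

log_a = logging.getLogger("myapp")
log_b = logging.getLogger("myapp")

print(log_a is log_b)  # True: one shared logger per name

# Callers only hand data in; nothing is grabbed back out.
log_a.warning("Disk space low")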

In the future, you’ll probably find a scenario where you’re considering using a singleton for any of the reasons above. But hopefully, you’ll now realize that singletons are not the answer — inversion of control is.

Checking Whether or Not an Ad Is a Tide Ad with Keras And NumPy

Note : If you’re interested in machine learning, you can get a copy of my E-book, “The Mostly Mathless Guide to TensorFlow Machine Learning” by clicking HERE

There’s a bunch of kids running around with a Coca-Cola in their hands. But hold on — look at their clothes! They’re so clean and white. Too clean, almost. Could this be a Tide ad?

Machine learning to the rescue! In this article, I’ll be showing you how to use TensorFlow, a machine learning library, to predict whether or not an ad is a Tide ad.

Prerequisites


This tutorial will be using Linux. You can probably do it on Windows too, but you may have to change some things. Here are the things you will need :

1. Python 3
2. TensorFlow (pip3 install tensorflow)
3. Keras (pip3 install keras)
4. ffmpeg (sudo apt-get install ffmpeg)
5. h5py (pip3 install h5py)
6. HDF5 (sudo apt-get install libhdf5-serial-dev)
7. Pillow (pip3 install pillow)
8. NumPy (pip3 install numpy)

Although VirtualEnv is not required, it is suggested that you use VirtualEnv to prevent any conflicts / version mistakes between Python 2 and Python 3.

Also, you can find all the code and bash scripts here : https://github.com/HenryDangPRG/TideAdIdentifier

Getting Started


First, we need to describe what our neural network will do. In this case, it will take one image as input and tell you whether or not that image belongs to a Tide ad. Using ffmpeg, we can split a video into its frames to feed an entire video into the neural network as well, and if over 50% of the frames in a video are classified as “Tide ads”, then we will consider it to be a Tide ad.

Next, we need data for our neural network to train on. The data will be a large set of .png images that we will get from slicing a video into individual pictures. I will not provide the videos as a download here, so you will need to find the 1 minute 45 second video of all the SuperBowl Tide ads, as well as 5 minutes’ worth of non-Tide ads. The two videos should also have the same dimensions, so that the extracted frames all come out the same size.

Once you obtain these two videos, convert them into .avi format and use ffmpeg to split them into their constituent frames. I’ve created a simple Bash script that will do the splitting process automatically for you, as long as you name the Tide ad video “tide.avi” and the non-Tide ad video “non_tide.avi”.

You can find the script here : https://github.com/HenryDangPRG/TideAdIdentifier/blob/master/generate_data.sh

The script above will take the two videos and extract 5 frames per second from each, with every frame sized 512 x 288, into two separate folders. You can choose to do this on your own as well, but in this tutorial, as a convention, all Tide ad pictures will be in a directory called “tide_ads”, and all non-Tide ad pictures will be in a directory called “non_tide_ads”.
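The heart of that script is one ffmpeg invocation per video, roughly like this (a sketch of the idea; the actual generate_data.sh may differ in details such as the output file naming):

# Slice out 5 frames per second, scaled to 512x288,
# with frame numbers padded to 7 digits
mkdir -p tide_ads non_tide_ads
ffmpeg -i tide.avi -vf fps=5 -s 512x288 tide_ads/tide-%07d.png
ffmpeg -i non_tide.avi -vf fps=5 -s 512x288 non_tide_ads/non_tide-%07d.png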

We’ll have to do the same with the test data and the prediction data; these are the bash scripts for those:

https://github.com/HenryDangPRG/TideAdIdentifier/blob/master/generate_test.sh
https://github.com/HenryDangPRG/TideAdIdentifier/blob/master/generate_predictions.sh

For the predictions, you can input any video format as the argument for the Bash script, but .avi is suggested for consistency.

NOTE : Remember to use chmod +x BASH_SCRIPT_NAME.sh on all of the Bash scripts so that you can execute them!

Creating A Convolutional Neural Network


Although this analogy is not perfect, you can think of a neural network as a group of students in a classroom who are all shouting out an answer. In this classroom, students are trying to determine whether or not a single image is from a Tide ad. Some students have a louder voice, so their “vote” for an answer counts more. A neural network can be thought of as thousands of students, all shouting different answers. The loudest answer gets passed to the next classroom, and those students discuss the answer (with their answers being modified by the previous classroom’s answer, perhaps by peer pressure), until we reach the very last classroom, where the loudest answer is the answer for the neural network.


A convolutional neural network (CNN) is similar in that the students are still looking at an image, but they are only looking at a piece of the image. When they finish analyzing this image, they pass it to the next classroom, but the next classroom gets an even tinier piece of the original image. And so on and so forth, with each next classroom’s image getting smaller and smaller. When they’re done, a vote is outputted.

The difference is that in a normal neural network, the students vote on the entire image at once, while in a CNN, each student only votes on a piece of the image.

Now that you know the basics, let’s jump into the code.

Making The Magic Happen


First, let’s define the parameters for our CNN.

from keras.preprocessing.image import ImageDataGenerator
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D
from keras.layers import Activation, Dropout, Flatten, Dense
from keras import backend as K

# Could use larger dimensions, but will make training
# times much much longer
img_width, img_height = (128, 72)

train_dir = 'train_data'
test_dir = 'test_data'

num_train_samples = 4000
num_test_samples = 2000
epochs = 20
batch_size = 8

Each image will be shrunk to 128 x 72 pixels. Although we could go smaller, we would risk losing too much information. Larger could be better, but the larger the image dimensions are, the longer it will take for the CNN to train.

Next, we’ll have to specify to our CNN whether the color channels come first or last in the array layout. With the TensorFlow backend, Keras defaults to channels last. Note that an image could have just one color channel if it were grayscale, but in this case, we will only be using color images. We have to specify this because Keras will turn our images into NumPy arrays, and the ordering matters.

# If data is formatted to have the channels first,
# then stick the RGB channels in front, else put them
# at the end.
if K.image_data_format() == 'channels_first':
    input_shape = (3, img_width, img_height)
else:
    input_shape = (img_width, img_height, 3)

Now that our CNN knows how many color channels (3 means RGB) our pictures have, as well as the dimensions of the image, we can apply three layers of convolution -> RELu -> max pooling.

The convolution step essentially slides a tiny matrix (a filter) across the original matrix, multiplying it element-wise with each section it covers and summing up the products. Each sum is recorded into a new matrix. The result is a smaller matrix called a feature map, which contains distinctive features of the image that the machine can then process.

Next, we will need to apply ReLu, or Rectified Linear Unit. ReLu is incredibly simple: if the number is negative, make it zero; otherwise, leave it alone. This works because, for a neural network processing an image, a negative value doesn’t really offer much information.

Imagine if we were detecting whether or not an image has dark blue lines. A value of zero in the feature map just means that there are no dark blue lines, while a positive value means that there might be a dark blue line. In that context, a negative value has no useful meaning, and can just be set to zero. ReLu also makes computations easier, because zeroes are incredibly easy to deal with.

Finally, we apply max pooling, which takes subsections of a matrix and extracts the highest value from each subsection. This shrinks the matrix, reducing computation time, while keeping the most important parts of the matrix.
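Both operations are simple enough to demonstrate on a toy matrix with plain NumPy (this is purely an illustration of the math; the model itself does not need this code):

import numpy as np

feature_map = np.array([[ 4, -2,  1,  0],
                        [-1,  3, -5,  2],
                        [ 0,  6,  2, -3],
                        [ 1, -4,  0,  5]])

# ReLU: negatives become zero, everything else is unchanged
relu = np.maximum(feature_map, 0)

# 2x2 max pooling: keep the largest value in each 2x2 block
pooled = relu.reshape(2, 2, 2, 2).max(axis=(1, 3))

print(pooled)
# [[4 2]
#  [6 5]]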

For our Tide ad predictor, we’ll use three layers of convolution, ReLu, and max pooling. When we’re done, we’ll put it all together with a fully connected layer.

# First convolution layer
model = Sequential()
model.add(Conv2D(32, (3, 3), input_shape=input_shape))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Second convolution layer
model.add(Conv2D(32, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Third convolution layer
model.add(Conv2D(64, (3, 3)))
model.add(Activation('relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))

# Convolution is done, so make the fully connected layer
model.add(Flatten())
model.add(Dense(64))
model.add(Activation('relu'))

Then, we apply dropout. Dropout, in our classroom analogy, is like duct taping certain students so that they cannot vote. By using dropout, every student needs to learn to give the right answer, rather than depending on the “smart” students to give the right answers. This gives us an extra layer of redundancy and prevents the loudest students from always overpowering the quiet students.


In the context of neural networks, it stops overfitting, which is when the neural network becomes too accustomed to the training data and fails to generalize for data that it hasn’t seen before. This can be caused when certain neurons have their weights modified too much by the previous layer’s weights, especially when one neuron has an abnormally high weight. Dropout makes it so that the influence of any one neuron (or group of neurons) is significantly reduced.
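In code, the dropout layer slots in right after the fully connected layer, followed by a single sigmoid output neuron for the binary Tide/not-Tide decision, and the model must be compiled before training. A minimal sketch (the 0.5 dropout rate and the rmsprop optimizer are conventional choices for this kind of binary classifier, not values dictated by anything above):

# Dropout, the output neuron, and compilation
model.add(Dropout(0.5))
model.add(Dense(1))
model.add(Activation('sigmoid'))

model.compile(loss='binary_crossentropy',
              optimizer='rmsprop',
              metrics=['accuracy'])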

From here, everything is pretty self-explanatory. We create some extra data points by slightly modifying the original images, and then plug those into our model. Finally, we train the model and save it for later use.

# perform random transformations so that the
# data is more varied
train_datagen = ImageDataGenerator(
    rescale=1. / 255,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)

test_datagen = ImageDataGenerator(rescale=1. / 255)

# make extra training data by modifying original training images
train_generator = train_datagen.flow_from_directory(
    train_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

# test data is only rescaled, not augmented
validation_generator = test_datagen.flow_from_directory(
    test_dir,
    target_size=(img_width, img_height),
    batch_size=batch_size,
    class_mode='binary')

# Train the CNN
model.fit_generator(
    train_generator,
    steps_per_epoch=num_train_samples // batch_size,
    epochs=epochs,
    validation_data=validation_generator,
    validation_steps=num_test_samples // batch_size)

# Saved CNN model for use with predictions
model.save('saved_model.h5')

If you don’t want to spend the time finding videos and training the CNN, the trained model is available in the GitHub repository here : https://github.com/HenryDangPRG/TideAdIdentifier

Predicting Whether a Video Is a Tide Ad


The prediction part is fairly trivial. All we need to do is take a video, turn it into image frames, turn those image frames into NumPy arrays, and then feed them into the trained CNN.

The video splitting part is done by generate_predictions.sh. All the Python code below does is feed those image frames into the CNN. If more than half of the frames aren’t Tide ads, then we can conclude that the video probably isn’t a Tide ad. (Note that in the prediction array, a value of 1 means the frame is NOT a Tide ad, and a value of 0 means that it IS a Tide ad.)

import numpy as np
import subprocess
from keras.preprocessing import image
from keras.models import load_model

def img_to_array(file_name):
    loaded_img = image.load_img(file_name, target_size=(img_width, img_height))
    img_array = image.img_to_array(loaded_img)
    img_array = np.expand_dims(img_array, axis=0)
    return img_array

if __name__ == "__main__":
    directory = "predictions/"
    num_images = int(subprocess.getoutput("ls predictions | wc -l"))
    img_width, img_height = (128, 72)

    # load the saved model, and use it for prediction
    model = load_model("saved_model.h5")
    images = []

    for i in range(1, num_images+1):
        # Need up to 6 leading zeroes for formatting
        i_ = "{:0>7}".format(i)
        next_img = img_to_array(directory + "prediction-" + str(i_) + ".png")
        images.append(next_img)  
    images = np.vstack(images) 
    prediction = model.predict(images) 
    percent_tide = sum(prediction)[0] * 100 / num_images

    print("There is a " + str(percent_tide) + "% chance this is not a Tide ad.")

    # If more than half the images are NOT Tide ads
    if sum(prediction)[0] >= num_images / 2:
        print("It can be concluded that this is NOT a Tide ad!")
    else:
        print("It can be concluded that this IS a Tide ad!")

Conclusion


How well does this work? Well, not too well. This neural network uses a pitifully small data set, and was trained for very few epochs, so it makes sense that the results are not particularly accurate.

Inputting Pepsi’s 2018 SuperBowl ad gives the following result :

There is a 45.833333333333336% chance this is not a Tide ad.
It can be concluded that this IS a Tide ad!

And inputting Skittles’ 2018 SuperBowl ad gives this result :

There is a 53.4675615212528% chance this is not a Tide ad.
It can be concluded that this is NOT a Tide ad!

So, does this make every SuperBowl ad a Tide ad? Our neural network seems to think it does. Almost.

Everything in Haskell Is a Thunk

There are a lot of misconceptions about Haskell.

Some people think everything in Haskell is a function. Others think that Haskell is just an implementation of Church’s lambda calculus. But both of these are false.

In reality, everything in Haskell is a thunk.

A thunk is a value that has not been evaluated yet. If everything in Haskell is a thunk, then that means that nothing in Haskell is evaluated unless it has to be evaluated.

From a non-functional programmer’s perspective, a thunk may initially seem to be a useless feature. To understand why thunks are useful, consider the following Python code.

x = 5 / 0

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: division by zero

Unlike Haskell, Python evaluates the results immediately, which means that Python will crash as soon as an error occurs.

On the other hand, Haskell is different. In fact, you can string together dozens of lines of code, all of which contain an error, and no error will appear until one of those lines gets evaluated.

let x = 3 `div` 0
let y = x

-- Can even divide x and y!
let z = x `div` y

let stringX = show x

-- Evaluate stringX, which
-- evaluates show x, which evaluates x
-- which finally evaluates 3 `div` 0, causing
-- the error to occur
putStrLn stringX

In this example, no matter what, Haskell will not give an error until something is evaluated. A common misconception is that a function call will always force the results to be evaluated, but that is simply not true.

In the example above, z is assigned the value of x divided by y, and yet there is no error. It is only when the value of stringX is evaluated via putStrLn (which prints a String) that Haskell throws an error.

One way to imagine thunks is to think of every value and function as being wrapped in a box. No one is allowed to peek into the box until the order is given. For all Haskell knows, the box could have anything in it, and Haskell wouldn’t care. As long as there are no type conflicts, Haskell will happily pass the box along.

In the code above, Haskell is fine with dividing two errors (both x and y, when evaluated, will give a division by zero error). This only works because x and y are both thunks, and Haskell doesn’t actually know the true values of x and y yet.

This gives Haskell the freedom to aggressively optimize code, allowing for massive performance boosts.

Imagine if Haskell had an expensive function, F, that takes in one parameter, takes 10 seconds to run, and returns the evaluated value 5. Say that this same function also existed in Python.

If we bound the result of F to a name and then used that name ten times, Haskell would take only about 10 seconds to finish, because the thunk is evaluated once and every later use reuses the result, while Python calling the function ten times would take 100 seconds. Here, Haskell’s functional properties and lazy evaluation allow it to outperform Python.

In Python, functions are allowed to have side-effects. This means that if a Python function is run ten times, with the same input, it can give ten different results. On the other hand, for Haskell, if a function is run ten times, with the same input, it will always give the same result. This means that if there are multiple copies of the same function, called on the same input, Haskell can be certain that those results will all be the same. This is known as referential transparency, and it’s a feature that exists in most functional programming languages.

This property, combined with Haskell’s lazy evaluation, is what lets Haskell memoize work safely. Because a pure function is guaranteed to return the same result for the same input, a thunk never needs to be evaluated more than once: Haskell evaluates it the first time its value is demanded, caches the result, and every other use of that thunk reuses the cached value. (Note that this sharing happens through named bindings; the compiler does not automatically detect separately written calls with the same argument.)
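Here is a small sketch of that sharing in action (expensive is a stand-in for any slow pure function; trace prints a message at the moment its value is actually computed):

import Debug.Trace (trace)

-- A pure function, with a trace to show when it really runs
expensive :: Int -> Int
expensive n = trace "evaluating!" (n * n)

main :: IO ()
main = do
    let v = expensive 5   -- creates one thunk; nothing runs yet
    print (v + v + v)     -- "evaluating!" appears only once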

What About Infinite Lists?

One consequence of everything being a thunk in Haskell is that Haskell is able to create infinite lists and pass them around. Haskell can process and manipulate infinite lists because it never builds the whole list: although the lists are conceptually infinite, Haskell only ever evaluates the elements that are actually demanded.

For example,

let x = [1..]
let y = [1..]

take 10 (x ++ y)
-- concatenation is lazy too; only the first
-- ten elements are ever actually evaluated
[1,2,3,4,5,6,7,8,9,10]

Here, x is bound to the infinite list counting up from 1, and y is bound to an identical infinite list. Haskell is perfectly fine with storing these infinite data structures, because it does not actually evaluate them until it needs to. When elements are finally demanded, the lists behave exactly like infinite lists, because that is exactly what x and y are.

In Haskell, treating everything as a thunk allows big performance boosts, and also gives the programmer the ability to store infinite data structures. So the next time someone says, “Everything in Haskell is X”, gently remind yourself that unless X is a thunk, they’re most likely wrong.

Common Boat Anchors in Programming and How to Avoid Them

A boat anchor is a metaphor that describes something that is obsolete or useless. In programming, boat anchors are everywhere. If you aren’t careful, boat anchors can ruin your productivity and cause confusion. Luckily, boat anchors are easy to prevent and correct if you look out for them. With just a little bit of foresight, you too can preserve your sanity.

TODO comments


Everyone’s used it. A seemingly innocent TODO comment. No harm done, since it’s just a little reminder for yourself, right?

Wrong.

TODO comments are not only useless, but a productivity sink as well. Firstly, the “TODO” that you gave for yourself usually never gets done, as it’s buried in some function or class that you probably won’t even revisit for another week or two. Even if you do see the TODO, you’ll most likely ignore it as you’ll be too focused on your current task.

Secondly, TODO comments are bad because they waste the reader’s time. TODO comments belong in a Trello board or some other organizational system, not in your code base. No one wants to waste time reading about something that you should have/want to have done, but never got around to.

Commented Code


In this day and age, if your organization uses GitHub or some other form of version control, commented code is absolutely useless. This isn’t referring to commenting your code with useful English messages, but rather commenting out chunks of runnable code.

Commented code is essentially noise for the reader. If you push commented code to your repository because you think you might “need it later”, then you are ignoring the key purpose of version control.

Version control saves code. If you ever need that snippet of code that you deleted two months ago, it’s easily accessible. Don’t comment out code, or you’ll end up with hundreds and hundreds of obsolete lines of code.

Uncommented Code That Is Unused


Imagine you’ve written a utility class that calculates mortgages for you.

Suddenly, you’re told that the feature is scrapped, but that it might make a reappearance in the future.

Upon hearing that it might be used again in the future, you decide to leave the utility class in the code, since you might “need it later”. Once again, this goes back to the idea of version control. If it’s not relevant, and if it’s never used or referenced, then it doesn’t belong in your repository. Delete it.

Obsolete Documentation


Let’s say you have a nifty little function that divides every number in a list by a given divisor.

def list_divide(numbers, divisor):
  """
  Divides every number in a list by a
  divisor and returns a new list with
  the resulting numbers.
  """
  ret_list = []

  for num in numbers:
    ret_list.append(num/divisor)

  return ret_list

Suddenly, you remember about division by zero and quickly add a check to your function.

def list_divide(numbers, divisor):
  """
  Divides every number in a list by a
  divisor and returns a new list with
  the resulting numbers.
  """

  if divisor == 0:
    print("Divisor must not be zero.")
    return

  ret_list = []

  for num in numbers:
    ret_list.append(num/divisor)

  return ret_list

Unfortunately, since you didn’t update the documentation, the docstring is now a lie. It doesn’t tell the user what the function actually does. There is no reference to division by zero in the docstring, causing the documentation to become a boat anchor.

Outdated documentation causes all sorts of issues for users. Luckily, this issue is easy to resolve. As a rule of thumb, if you add additional functionality to a function, document it. Even if you don’t like documentation, or if you think it’s tedious and pointless, do it anyway. You’ll save other programmers from headaches and wasted time.

Adding Ticket Numbers as Comments


Although not extremely common, some people feel the need to compulsively comment every change they make with a ticket number.

For example, you might see something like this :

#Added foobar function to resolve ticket 53
def foobar(num):
  #Fixes ticket 281
  if num == 0:
    print("Hello World!")
  elif num == 3:
    #Added because of ticket 571
    print(num/9)
  #Added else because of ticket 1528
  else:
    #Due to ticket 4713, changed to 'Goodbye World!' 
    print("Goodbye World!")

Before long, everything in your code will have a comment regarding a ticket.

If you are guilty of doing this, stop it. No one cares what ticket number the fix was in. Make the fix, commit, and push. All of the comments about tickets and issue numbers are utterly pointless and end up as unnecessary noise to the reader.

If you feel that the change is so strange or quirky that you absolutely must give some sort of background information, then write a comment. Making the reader search up a specific ticket to understand the code is a gigantic time sink, especially since most tickets have multiple comments. No one wants to read through all that text. Summarize it in a single comment and move on.

Storing Variables For No Reason


It’s a common practice for developers to do something along the lines of this :

def foobar(num):
  return num * 13

#Doing this
num = foobar(32)
print(num)

#As opposed to
print(foobar(32))

As long as “num” is not the result of some wildly long function call, there is no reason to store it in a variable. Here, foobar(32) is short, so it should be passed directly into the other function rather than being assigned to a variable first.

Assigning a short function call to a variable tells the reader, “The variable num is important. It will be reused later in some way somewhere down the line.”

Not only that, but unnecessarily creating variables adds extraneous code for no good reason.

Conclusion


Boat anchors are everywhere in code. If you can avoid them, you will make not only your life easier, but everyone else’s as well. No one wants to waste time reading excessive code. Reading code is already difficult. Making your peers read more code than necessary hurts their productivity and makes coding an unpleasant experience.

So be a good person. Prune off the excess and correct the outdated. If you find any excessive code, then remove it. If the documentation is inaccurate or outdated, fix it. It might be a thankless job, but know that you’ll have made the world just a little brighter in the process.

Canny Edge Detection in Python with OpenCV

NOTE: Are you interested in machine learning? You can get a copy of my TensorFlow machine learning book on Amazon by clicking HERE

In my previous tutorial, Color Detection in Python with OpenCV, I discussed how you could filter out parts of an image by color.

While it will work for detecting objects of a particular color, it doesn’t help if you’re trying to find a multi-colored object.

For this tutorial, we will be using this basket of fruits.

[Image: a basket of fruits on a table (fruitbasket.jpg)]

Let’s say we wanted to detect everything except for the table.

We could try and use color, but that would fail very quickly because all of the fruits are different colors. Since canny edge detection doesn’t rely on color, we can use it to solve our problem.

Canny Edge Detection


If I asked you to draw around the basket, you could easily do it.

But hold on, why is it possible for you to differentiate between the basket and the table?

The answer is that the basket has a different image gradient than the table.

To understand what I mean, take a look at the same image in black and white.

[Image: the same fruit basket in black and white]

Even without color, you can clearly see the edges of the basket and fruits because the gradients are vastly different at the edges.

Canny edge detection uses this principle to find edges. Every edge has a gradient intensity different from its surroundings. If two adjacent parts of the image have the same gradient intensity, there is no edge between them, because the brightness changes smoothly from one part to the other.
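You can compute such a gradient yourself to see the idea (a minimal sketch using OpenCV’s Sobel derivatives on the same image):

import cv2
import numpy as np

img = cv2.imread('fruitbasket.jpg', 0)  # load as grayscale

# Horizontal and vertical rates of change in brightness
grad_x = cv2.Sobel(img, cv2.CV_64F, 1, 0, ksize=3)
grad_y = cv2.Sobel(img, cv2.CV_64F, 0, 1, ksize=3)

# Large magnitude = sharp change in brightness = likely edge
magnitude = np.hypot(grad_x, grad_y)
print(magnitude.max(), magnitude.mean())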

Applying Canny Edge Detection


We will be loading the image in as a black and white photo. Images in black and white still have gradients, so it is still possible to differentiate between edges. As a rule of thumb, things tend to be much simpler when they are in black and white.

After we load the image, we need to apply canny edge detection on it.

import cv2
from matplotlib import pyplot as plt

def nothing(x):
    pass


img_noblur = cv2.imread('fruitbasket.jpg', 0)
img = cv2.blur(img_noblur, (7,7))

canny_edge = cv2.Canny(img, 0, 0)

cv2.imshow('image', img)
cv2.imshow('canny_edge', canny_edge)

cv2.createTrackbar('min_value','canny_edge',0,500,nothing)
cv2.createTrackbar('max_value','canny_edge',0,500,nothing)

while(1):
    cv2.imshow('image', img)
    cv2.imshow('canny_edge', canny_edge)
    
    min_value = cv2.getTrackbarPos('min_value', 'canny_edge')
    max_value = cv2.getTrackbarPos('max_value', 'canny_edge')

    canny_edge = cv2.Canny(img, min_value, max_value)
    
    k = cv2.waitKey(37)
    if k == 27:
        break

cv2.Canny takes your input image as its first argument, and min and max thresholds on gradient intensity as its second and third: gradients above the max are kept as edges, gradients below the min are discarded, and gradients in between are kept only if they connect to a strong edge. It also has an optional L2gradient argument, which we have left at its default.

If L2gradient is set to True, cv2.Canny uses a slower but more accurate formula for the gradient magnitude. By default, it is False, and the gradient intensity is calculated by adding up the absolute values of the gradient’s X and Y components.

Before canny edge detection can be applied, it is usually a good idea to apply a blur to the image so that random noise doesn’t get detected as an edge.

You can adjust the track bar however you’d like to edit the min and max values for the canny edge detection. I found that a min value of 36 and a max value of 53 worked well.

[Image: edge detection results with a min value of 36 and a max value of 53]

It also depends on how much you are blurring the image. The more you blur the image, the less noise there is. However, a blurrier image has less accurate edges.

On the flip side, not enough blurring causes random noise to be detected. In the end, the level of blur is a trade-off between noise and edge accuracy.

Conclusion

Canny edge detection is only one of the many ways to do edge detection. There are hundreds of different edge detection methods, including Sobel, Roberts, SUSAN, Prewitt, and Deriche.

All edge detection methods have pros and cons, and Canny is just one of them. In general, canny edge detection tends to yield good results in most scenarios, so it is well-suited for general use. However, other edge detectors may be better depending on the situation.

If Canny isn’t working effectively for you, try a different solution. The best way to know if something works well is to test it out for yourself.

Speak My Language: i18n and l10n in Django

A user of your app emails you and tells you that they would love if the greeting screen were in Spanish.

def greeting():
  print("Hello!")

You think to yourself, “No problem! I’ll just use a dictionary!”

dictionary = {
  "Hello!":"Hola!",
}

def greeting():
  print(dictionary.get("Hello!"))

Now, after running greeting(), you get “Hola!”. Perfect! You’re done localizing it into Spanish.

Afterwards, Mr. Foobar requests that you localize it into French as well.

Except, you can’t, because dictionaries are one-to-one. You can’t add a new key-value pair of “Hello!” and “Bonjour!”.

You could instead add one dictionary per language, but that runs into another problem :

It’s difficult to edit, and your translators are lost and confused because they aren’t programmers.

You could do it with a set of parallel arrays, and have something like:

english = ["Hello!"]
spanish = ["Hola!"]
french = ["Bonjour!"]

But what would happen if you didn’t have one word, but a thousand words, in 10 different languages? What then? Ten parallel arrays with a thousand words in each array? (ignoring the fact that a word in English might be two words in Spanish) And what if the localization isn’t fully complete? Would you just insert blank strings into the array to make the array indexes line up?

There are simply too many issues with parallel arrays, and the amount of vertical space you would need for a set of parallel arrays that large would be bad enough to summon Cthulhu from the depths of R’lyeh.

Most importantly, no sane translator is going to waste time counting the indexes to make sure they’re adding the word into the right index.

We need a solution that is easy to understand for translators, easy to maintain, and doesn’t require writing lots of code.

So what’s the solution proposed by Django?

Django’s Solution

Django uses ugettext to do i18n, or internationalization. Internationalization is the act of making all the strings in your code translatable into a different language.

Commonly, you will see ugettext imported like this:

from django.utils.translation import ugettext as _

For reasons that may forever be unknown to me, ugettext is always aliased as an underscore as common convention.

Conventions aside, let’s create a dummy view in Django to show how ugettext works.

from django.shortcuts import render
from django.http import HttpResponse
from django.utils.translation import ugettext as _

def testView(request):
  output = _('Hello!')
  return HttpResponse(output)

To use ugettext, you simply need to call ugettext('STRING_HERE'), or _('STRING_HERE'), since the underscore is aliased to ugettext.

Next, you will need to create a folder called “locale” in the base directory of your project. Then, edit your settings.py and add:

# Note the trailing comma: without it, this is just a
# parenthesized string, not the tuple Django expects
LOCALE_PATHS = (os.path.join(BASE_DIR, "locale"),)
#Gives you BASE_DIR/locale

This tells Django that our translation files live inside the “locale” folder.

Now comes the fun part. Run django-admin makemessages -l es. This will create a .po file called “django.po” in BASE_DIR/locale/es/LC_MESSAGES, where “es” stands for español, or Spanish.

The .po extension stands for Portable Object, and it is used to hold all of the phrases in the original language and the translated language.

If you open “django.po”, you will see the following:

#, python-format
msgid "Hello!"
msgstr ""

Now, edit the msgstr so that it reads as follows:

#, python-format
msgid "Hello!"
msgstr "Hola!"

We are adding a new translation for the Spanish version of “Hello!”.

Now, we need to compile our .po file to a .mo file, which Django can use to process our translations.

Run django-admin compilemessages -l es

Once the .mo file is created, Django will be able to search for the input word or phrase, and output the translated word or phrase. After this step is done, the localization for Spanish is complete!

But Why Isn’t My Text Translated?

If you ran the code, you may have noticed that the text didn’t translate. And you’re correct.

Why should it?

Localization is intended to provide different languages to specific users. Django differentiates between Spanish-speaking and English-speaking users by your browser’s locale, which is sent with every request in the Accept-Language header. Django will only translate the page if your locale is “es”. If you live in the United States, your locale is most likely “en” or “en-us”.

If you want to test your code, simply change your locale to “es”, and the translation will work appropriately.
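You can also force a language from within Django itself, which is handy for quick tests (a minimal sketch using Django’s translation API):

from django.utils import translation

translation.activate('es')          # force Spanish for this thread
print(translation.get_language())   # 'es'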

Conclusion

Django’s solution is essentially a gigantic dictionary split into multiple files, with each language being represented by a .po file. A huge benefit of this is that the code is partially decoupled from the translation. The .po file is generated from the code, but translators are unlikely to crash the website by editing the .po file, as they only have to insert their translation into the msgstr part.

As a result, the programmers are happy, as the code is separate from the translations, and the translators are happy, as the translations are separate from the code. It’s easy for both parties to do their part: translators don’t have to fiddle with the code, and programmers don’t have to fiddle with long bundles of translated phrases.

Django’s solution is a perfect win-win for everyone.