How To Maximize Job Security By Secretly Writing Bad Code

Disclaimer: While the tips in this post are absolutely true and will make your code unmaintainable, this post is satire and should be treated as such. Please don't purposely write bad code in production.

By a sudden stroke of bad luck, half of your team has been laid off due to a lack of budget. Rumor has it that you’re next. Fortunately, you know a little secret trick to ensuring job security in your cushy software development position — by writing bad code of course!

After all, if no one can set up, deploy, edit, or read your code, then you're now a "critical" member of the team. If you're the only one who can edit the code, then you obviously can't be fired!

But your code has to meet a bare minimum bar of quality, or else others will catch on to how terrible and nefarious you are. That's why today, I'm going to teach you how to maximize your job security by secretly writing bad code!

Couple Everything Together, Especially If It Has Side Effects

A little-known secret is that you can write perfectly "clean"-looking code and still inject lots of potential bugs and issues into it, just by unnecessarily flooding it with I/O and side effects.

For example, suppose you’re writing a feature that will accept a CSV file, parse and mutate it, insert the contents into a database, and then also insert those contents into a view via an API call to the database.

Like a good programmer, you could split the obvious side effects (accepting the CSV, inserting data into a database, inserting data into a view, calling the API and fetching the data) out into separate functions. But since you want to sneakily write bad code under the guise of clean code, you shouldn't do this.

What you should do, instead, is hardcode the CSV name and bundle all of the CSV parsing, mutation, and insertion into one function. This guarantees that no one will ever be able to write a test for your code. Sure, someone could attempt to mock all of the side effects out, but since you've inconveniently bundled them all together, there isn't any easy way to do this.
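To make this concrete, here's a minimal Python sketch of what such a bundled function might look like (the file name, table, and refresh endpoint are all made up for illustration):

import csv
import sqlite3
import urllib.request

def process_everything():
    # Hardcoded file name, so nobody can swap in a test fixture.
    with open("quarterly_report.csv") as f:
        rows = list(csv.reader(f))

    # Parsing and mutation tangled in with the I/O.
    cleaned = [[cell.strip().upper() for cell in row] for row in rows[1:]]

    # Database insertion in the same function...
    connection = sqlite3.connect("reports.db")
    connection.executemany("INSERT INTO report VALUES (?, ?, ?)", cleaned)
    connection.commit()

    # ...and the API call that refreshes the view, too.
    urllib.request.urlopen("http://localhost:8000/api/refresh-view")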

How do we test this functionality? Is it even testable? Who knows?

If someone were insane enough to try, they would first have to mock out the CSV file, then the CSV-reading functionality, then the database, then the API call, and only then could they finally test the mutation logic using all of those mocks.
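For the morbidly curious, here's roughly what that might look like with Python's unittest.mock, assuming the bundled process_everything sketch from earlier (the fixture data is invented):

from unittest import mock

def test_process_everything():
    fake_csv = "id,name,amount\n1,widget,9.99\n"

    # Mock the file, the database, and the API call just to reach the mutation logic.
    with mock.patch("builtins.open", mock.mock_open(read_data=fake_csv)), \
         mock.patch("sqlite3.connect") as fake_connect, \
         mock.patch("urllib.request.urlopen"):
        process_everything()

    # Only after all that scaffolding can we assert anything about the mutation itself.
    inserted_rows = fake_connect.return_value.executemany.call_args[0][1]
    assert inserted_rows == [["1", "WIDGET", "9.99"]]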

But since we've gone through the lovely effort of putting everything in one function, if we wanted to test this for a JSON file instead, we would have to redo all of the mocks. This is because we've essentially written an integration test over all of those components, rather than unit tests for the individual pieces. We've checked that they work when put together, but we haven't actually proven that any of the individual components work on their own.

The main takeaway here — cram as many side effects into as few functions as possible. By not separating things out, you force everyone to mock everything just to test anything. Eventually, you reach a critical mass of side effects, at which point the code needs so many mocks to test that it's no longer worth the effort.

Hard-to-test code is unmaintainable code, because no sane person would ever refactor a code base that has no tests!

Create Staircases With Nulls

This is one of the oldest tricks in the book, and almost everyone knows it: one of the easiest ways to destroy a code base is to flood it with nulls. Try and catch is just too difficult for you. And make sure you never use Options/Optionals/Promises/Maybes. Those things are basically monads, and monads are complicated.

You don't want to learn new things, because that would make you a marketable and useful employee. So the best way to handle nulls is to stick to the old-fashioned way: nesting if statements.

In other words, why do this:

try:
    database = mysql.connector.connect(...)
    cursor = database.cursor()
    query = "SELECT * FROM YourTable"
    cursor.execute(query)
    query_results = cursor.fetchall()
    ...
except:
    ...

When you could instead do this?

database = None
cursor = None
query = None
query_results = None

database = mysql.connector.connect(...)
if(database == None):
    print("Oh no, database failed to connect!")
else:
    cursor = database.cursor()
    if(cursor == None):
        print("Oh no, cursor is missing!")
    else:
        query = "SELECT * FROM YourTable"
        if(query == None):
            print("Honestly, there's no way this would be None")
        else:
            cursor.execute(query)
            query_results = cursor.fetchall()
            if(query_results == None):
                print("Oh no, query_results is None")
            else:
                print("Wow! I got the query_results successfully!")
The dreaded downward staircase in this code represents your career and integrity as a programmer spiraling into the hopeless, bleak, and desolate void. But at least you’ve got your job security.

Write Clever Code/Abstractions

Suppose you’re, for some reason, calculating the 19428th Fibonacci term. While you could write out the iterative solution that takes maybe ten lines of code, why do that when you could instead write it in one line?

Using Binet's formula, you can approximate the term in a single line of code! Short code is always better than long code, so your one-liner is obviously the best solution.
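In Python, the "clever" version might look something like this (a sketch; Binet's formula leans on floating-point math, so for a term as large as the 19428th it will overflow rather than give an answer, which is part of the charm):

# Binet's formula: approximate the nth Fibonacci number using the golden ratio.
fib = lambda n: round((((1 + 5 ** 0.5) / 2) ** n - ((1 - 5 ** 0.5) / 2) ** n) / 5 ** 0.5)

print(fib(10))     # 55, so far so good
print(fib(19428))  # raises OverflowError, but at least it was only one line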

But oftentimes, the cleverest code is the code that abstracts for the sake of abstraction. Suppose you're writing in Java, and you have a very simple bean class called Car, which contains "Wheel" objects. Despite the fact that these Wheel objects take only one constructor parameter, and despite the fact that the only parameters this car takes are its wheels, you know in your heart that the best option is to create a Factory for your car, so that all four wheels can be populated at once by the factory.

After all, factories equal encapsulation, and encapsulation equals clean code, so creating a CarFactory is obviously the best choice in this scenario. Since this is a bean class, we really ought to call it a CarBeanFactory.

Sometime later, we realize that some cars might even have four different kinds of wheels. But that's not an issue, because we can just make the factory abstract, giving us AbstractBeanCarFactory. And since we really only need one of these factories, and the Singleton design pattern is so easy to implement, we can just turn this into a SingletonAbstractBeanCarFactory.
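If you want to picture what that looks like in code, here's a rough sketch (rendered in Python rather than Java, to match the earlier examples; the class names are made up, just like in the scenario above):

from abc import ABC, abstractmethod

class Wheel:
    def __init__(self, size):
        self.size = size

class Car:
    def __init__(self, wheels):
        self.wheels = wheels

class SingletonAbstractBeanCarFactory(ABC):
    _instance = None

    @classmethod
    def get_instance(cls):
        # Singleton boilerplate for a factory that holds no state whatsoever.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    @abstractmethod
    def create_wheel(self, size):
        ...

    def create_car(self, wheel_sizes):
        # "Encapsulates" four constructor calls that nobody was confused about.
        return Car([self.create_wheel(size) for size in wheel_sizes])

class StandardCarFactory(SingletonAbstractBeanCarFactory):
    def create_wheel(self, size):
        return Wheel(size)

car = StandardCarFactory.get_instance().create_car([17, 17, 17, 17])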

At this point, you might be shaking your head, thinking, “Henry, this is stupid. I might be trying to purposely write bad code for my own job security, but no sane engineer would ever approve that garbage in a code review.”

And so, I present to you Java's Spring Framework, featuring the AbstractSingletonProxyFactoryBean.
Surely, no framework could do anything worse than that.

And you would be incredibly, bafflingly, and laughably wrong. Introducing an abstraction whose name is so long that it doesn't even fit properly on my blog: HasThisTypePatternTriedToSneakInSomeGenericOrParameterizedTypePatternMatchingStuffAnywhereVisitor

Conclusion

Bad code means no one else can edit your code. If no one can edit your code, and you're writing mission-critical software, then you can't be replaced. Instant job security!

In order to write subtly yet destructively bad code, remember to always couple things together, to create massive staircases of null checks (under the guise of being "fault-tolerant" and "safe"), and to write as many clever abstractions as you can.

If you ever think you’ve gone over the top with a clever abstraction, you only have to look for a more absurd abstraction in the Spring framework.

In the event that anyone attempts to argue with your abstractions, gently remind them:

  1. Spring is used by many Fortune 500 companies.
  2. Fortune 500 companies tend to pick good frameworks.
  3. Therefore Spring is a good framework.
  4. Spring has abstractions like SimpleBeanFactoryAwareAspectInstanceFactory
  5. Therefore, these kinds of abstractions are always good and never overengineered.
  6. Therefore, your SingletonBeanInstanceProxyCarFactory is good and not overengineered.

Thanks to this very sound logic, and your clean-looking but secretly bad code, you’ve guaranteed yourself a cushy software development job with tons of job security.

Congratulations, you’ve achieved the American dream.


The Five Year Old’s Guide To ReactJS

ReactJS is a front-end library that only handles the "view" side of a website. In other words, ReactJS only handles the user interface, meaning you'd need separate libraries (or a separate backend entirely) to handle everything else.

Fortunately, ReactJS is one of the easier front-end libraries to learn because of its intuitive and straightforward nature. With that being said, "intuitive" is a rather subjective term, and since this is the five year old's guide to ReactJS, let's jump right into how ReactJS works with a simple analogy.


ReactJS is basically a Lego set. Just as a Lego building is made out of smaller Legos, a ReactJS website is made from smaller components, which are reusable bundles of HTML and JavaScript (written together using a syntax extension called JSX). Each component can store its own state and can be passed "props", which allow you to customize the component's behavior.

Making a UI with ReactJS is incredibly easy, since you are effectively connecting and stacking a bunch of components together until you have a full website. Let’s take a simple example.

[Diagram: a page split into div A and div B]

Let’s start with a simple page that we’ll divide into two divs. Now let’s suppose we’d like to put a sidebar into div A. This sidebar should have clickable links. How would we do this?

First, we would create a blank component called “Sidebar”.

[Diagram: an empty Sidebar component placed in div A]

Remember that components are just bundles of HTML and JavaScript. So from here, inserting a bunch of links with some CSS styling would give us a functional sidebar component.

[Diagram: the Sidebar component filled with links]

In case you're the "show me the code" kind of person, here's an unstyled version of what that component might look like in ReactJS.

class Sidebar extends React.Component {
  render() {
    return (
      <div>
        <ul>
          <li><a href="/home">Home</a></li>
          <li><a href="/about">About</a></li>
          <li><a href="/etc">Etc</a></li>
        </ul>
      </div>
    );
  }
}

Now, we actually have to create a component to represent the entire web page, which we'll call "App". Once we've created it, we can insert our Sidebar component into the App component. In other words, we're building components out of other, smaller components.

const App = () => {
  return (
    <div className="App">
      <div className="divA">
          {/* This is the sidebar component
              that we just built! */}
          <Sidebar></Sidebar>
      </div>
      <div className="divB">
        {/* Put content in div B later */}
      </div>
    </div>
  );
}

From a graphical perspective, this is what we’ve done (both App and Sidebar are components):

This is fine and all, but suppose you wanted this sidebar to have different links, or perhaps have the links updated based on some sort of database query. With our current implementation, we have no way of doing this without editing the Sidebar class and hard-coding those links.

Fortunately, there’s a solution for this, which is to use props. Props are like inputs for a component, and with them, you can dynamically and flexibly configure a component. Just like how functions can accept parameters, components can accept props.

However, there are two rules for using props.

  1. Components shouldn’t mutate their props
  2. Components must only pass their props and any other data downstream.

The first rule just means that if a component receives some props as input, it should not change them. The second rule isn't a hard-and-fast rule, but if you want clean, debuggable code, you should try your best to always pass data/props unidirectionally.

DON’T pass props upwards to a component’s parent. The short version of why is that passing props upwards causes your components to be tightly coupled together. What you really want is a bunch of components that don’t care about who their parent or child components are. If you ever have to pass a prop up the hierarchy, you’re probably doing something wrong.

With that being said, one of the most common ways to use props is through props.children, which is best explained through a code example.

class Sidebar extends React.Component {
  constructor(props){
    super(props);
  }

  render() {
    return (
      <div>
        <ul>
          {this.props.children}
        </ul>
      </div>
    );
  }
}

Props are for when you want to configure components in a flexible way. In this example, the Sidebar component no longer has pre-defined links. Instead, you pass them in via the App component.

const App = () => {
  return (
    <div className="App">
      <div className="divA">
          {/* The stuff in-between the Sidebar tags
              is the props.children */}
          <Sidebar>
            <li><a href="/home">Home</a></li>
            <li><a href="/about">About</a></li>
            <li><a href="/etc">Etc</a></li>
          </Sidebar>
      </div>
      <div className="divB">
        {/* Put content in div B later */}
      </div>
    </div>
  );
}

This means you can now create infinitely many combinations of the Sidebar component, each one having whatever links you want!

const App = () => {
  return (
    <div className="App">
      <div className="divA">
          {/* In the diagram below, blue text */}
          <Sidebar>
            <li><a href="/home">Home</a></li>
            <li><a href="/about">About</a></li>
            <li><a href="/etc">Etc</a></li>
          </Sidebar>

          {/* In the diagram below, red text */}
          <Sidebar>
            <li><a href="/contact">Contact</a></li>
            <li><a href="/shop">Shop</a></li>
          </Sidebar>
      </div>
      <div className="divB">
        {/* Put content in div B later */}
      </div>
    </div>
  );
}

You can even pass props in through the component's attributes! Let's say you have a "Profile" page, and you need to change the "Name" section based on who is logged in. With components, you can simply have the parent fetch the name, and then pass that name as a prop to the child components that need it!

const Profile = (props) => {
  return (
    <div className="Profile">
      {/* You can define a function to fetch
          the name from the backend, and then
          pass it into the { .... } */}

      {/* Here is a hard-coded example:
          <NameParagraph personName="John">
          </NameParagraph> */}
      <NameParagraph personName={ .... }>
      </NameParagraph>
    </div>
  );
}

class NameParagraph extends React.Component {
  constructor(props){
    super(props);
  }

  render() {
    return (
      <div>
        {this.props.personName}
      </div>
    );
  }
}

Notice that there are no weird shenanigans happening here. The NameParagraph isn't fetching the name on its own, and it isn't passing any callbacks or props up to its parent; it's a simple downstream flow of data.

States, props, and components form the core of how React works at a fundamental level. You create components, and then weave them together to create larger components. Once you’ve done this enough times, you’ll finally have one giant component that makes up your web page.

This modular approach, along with the downstream flow of data, makes debugging a fairly simple process. This is largely because components are supposed to exist independent of other components.

Just like how you can detach a Lego piece from a giant Lego sculpture and use it anywhere else, in any other Lego project, you should be able to take any component in your code and embed it anywhere else in any other component.

Here's a short and sweet haiku to sum this all up:

When using React

Code as if your components

Were Lego pieces.

Why You’ll Probably Never Use Haskell in Production

If you've ever chatted with a Haskell programmer, you've probably heard that using Haskell in production is one of the greatest joys in life. But the grim reality is that Haskell just doesn't quite work out in most corporate environments. As a pure functional programming language born in the depths of academia, Haskell has several issues that make most developers hesitant to adopt it.

So without further ado, I'm going to explain to you why you'll probably never use Haskell in production.

 


Haskell isn’t a popular programming language


Unfortunately, the reality of Haskell is that very few people know how to code in Haskell, and even fewer are good enough to use it to the same level of productivity as the average Python or Java developer. This causes two big issues.

  1. Hiring a Haskell developer is much harder than hiring a Java, C++, or Python developer.
  2. Teams are unlikely to switch to Haskell because the majority of team members probably won’t know Haskell

Because of these two issues, it’s nearly impossible to use Haskell in production without having a team that is already fluent in Haskell. However, because it’s much harder to hire a Haskell developer than a Java/Python developer, it also becomes highly unlikely that a team of Haskellers will ever be formed.

One common argument is that if Haskell is a perfect fit for your business requirements, then you can just teach your team Haskell. After all, developers are capable of learning new technology stacks on the fly, so there’s no need for everyone to already know Haskell.

While this sounds good on paper, it’s not actually possible for developers to just “learn” Haskell, because…

 


Haskell has a massive learning curve


Any developer worth their salt should be able to learn a new OOP programming language in under two weeks. If you already know a couple of OOP languages, then learning another OOP language is incredibly easy, because all of your knowledge transfers over.

But Haskell doesn't work that way, because it's a completely different paradigm and sets itself apart even from other FP languages. Its learning curve is by far the steepest of any programming language I've worked with. Not to mention, it's probably one of the few programming languages in the world where people will recommend reading and sharing research papers as a way to learn the language.

That's not all, though, because when it comes to Haskell, your programming experience works against you. It doesn't take two weeks to learn Haskell. It takes months, and likely several more to become productive in it. The issue is that since you already have certain expectations of what programming is about, you'll have preconceptions about how things might be done in Haskell. And all of those preconceptions will be wrong.

For example, in most OOP languages (e.g., Java and C#), a function that returns an integer doesn't really return an integer. It returns either an integer or null. Haskell won't let you do that: if your value might be absent, you have to wrap it in the Maybe monad.
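To see the habit Haskell is rejecting, here's a small Python sketch of the "integer or null" problem described above (Python standing in for the OOP languages here; the function and data are invented for illustration):

from typing import Optional

def find_user_age(user_id: int) -> Optional[int]:
    # The signature admits what most OOP code leaves implicit:
    # "returns an int" really means "returns an int or None".
    users = {1: 34, 2: 27}
    return users.get(user_id)  # None when the id is missing

age = find_user_age(3)
if age is None:   # forget this check and you get the classic null crash
    print("no such user")
else:
    print(age + 1)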

But why do we even need monads? In an OOP language, everyone got along just fine without them (mostly by doing null checks everywhere).  So why do we need monads in Haskell if we never had to use one in any other programming language?

And what is a monad anyway? Unfortunately, monads are notoriously hard to explain. They’re so hard, in fact, that there’s an entire meme about monad tutorials. Some people will tell you that monads are burritos, others will say monads are boxes, and really, none of them will do much to help you understand monads. You’ll have to work with monads in order to understand them, and when you do understand them, you’ll wonder why you were ever confused about them in the first place. 

When it comes to Haskell, monads are essentially a brick wall for beginners. Most people simply quit when they realize that almost every Haskell library in the world requires monads. But suppose someone does get over that hump. Understanding monads in general and understanding the monads in a standard Haskell library are two completely separate things, which brings us to the third problem.

 


Haskell has very few mature and documented libraries


Technically, I’m cheating by listing two points in one, but they’re both very relevant to the issue at hand.

Firstly, when it comes to libraries, Haskell only has a handful of good ones. These are the ones that almost everyone knows — Parsec, Yesod, pandoc, aeson, etc. These libraries have good documentation; in fact, some of it is spectacular, like how Yesod literally has an entire book for its documentation.

But those are just the mature and well-known libraries. What if you want to use a relatively less well-known library? Tough luck, because those libraries almost never have any documentation.

In fact, when it comes to documentation, a lot of experienced Haskellers will tell you that the only documentation they need is the function names and type signatures. And it's true, you can usually figure out a lot from those two, and the Theorems for Free paper is a great read if you're looking to learn more.

But is it really enough? For most developers, we want to see example code. For popular libraries, sure, there are thousands of examples, but for a smaller library? You’d be lucky for them to even have a useful README page. Working with poor, undocumented Haskell libraries is something you need to experience for yourself to understand why this is such a deal breaker.

All in all, Haskell's thin library ecosystem makes it incredibly risky to use Haskell as a true "general purpose" programming language, because Haskell only has mature libraries for a small handful of fields. If your problem domain isn't completely covered by some combination of Haskell's popular libraries, then you should probably choose a different language.

 


Conclusion


If you’re looking to use a functional programming language in a production environment, Haskell is likely a bad call.

Very few people know Haskell to start with, and its insane learning curve means that it will take a long time before a developer reaches a proficient level with Haskell. Combine this with Haskell’s poor library ecosystem, and any plans for using Haskell in production will likely come to a grinding halt.

For a normal production environment, with a normal team, you are better off using F# or Clojure. Both languages have much gentler learning curves, better documentation, and a huge library ecosystem, since they were built with interoperability in mind: F# has complete access to all .NET libraries (and it's directly supported by Microsoft!), and Clojure runs on the JVM, so you can use any Java library from it. Both are much safer choices than Haskell, and if neither of them works out, there's always the option of using Scala (which also runs on the JVM).

But maybe you're one of the lucky few who get to work on a corporate Haskell project; they do exist. Facebook, for example, created Haxl and Sigma for spam detection.

But these are the exception rather than the rule. Most of the time, people just want to get things done, and unless the stars align and you have exactly the right team with exactly the right problem, Haskell is almost never going to be the fastest, safest, or cheapest solution.

Why I Will Never Use DRM Protection On My Books

When I tell other bloggers that I don't use DRM on my e-books, they look at me as if I'm insane. If you're not familiar with it, DRM stands for Digital Rights Management, and it's basically copyright protection for your books. It works by using software to block unauthorized access to content, often by forcing you to use a specific ebook reader.

In the case of my book, The Mostly Mathless Guide To TensorFlow Machine Learning, if I had applied DRM to the ebook, it would mean that you'd have to log in through Amazon's services in order to read the book, and that you'd have to use their specific ebook reader. DRM, in other words, lets you protect your books from piracy, and yet my book doesn't have any DRM at all. But why?

The first thing to recognize is that three core assumptions are being made here:

  1. The assumption that all consumers will pirate your book if given the chance, instead of buying it legally.
  2. The assumption that DRM is effective at protecting your ebooks and will boost sales.
  3. The assumption that DRM is not harmful to your readers.

With that being said, let's analyze each of these one at a time.

 


Are Consumers Just Evil Pirates?


A lot of people are under the impression that if readers can obtain an ebook illegally, then they will. Fortunately, this misconception is the easiest one to debunk with statistics, because it isn't even remotely true. According to a study of about 35,000 individuals, only "5% said that they currently pirate books". Assuming that consumers will just pirate everything means pessimistically assuming that consumers are immoral and will steal from authors whenever given the chance.

However, you might be thinking that 5% is still a significant number of pirates, and you would be right. If we only looked at this single point, then any normal person would conclude that adding DRM to your books would give a slight boost to sales.

So what does this mean? That DRM is the way to go, and that it’ll really protect your book from piracy and boost your sales? Well, not exactly.

 


Is DRM Effective At Protecting Your Ebooks, And Will It Boost Sales?


If you’re reading this blog, then you’re probably a programmer or computer scientist. If so, then you already know the answer to this question. DRM is a complete joke, and almost every tech-savvy person knows how easy it is to bypass DRM protections. In the case of my ebook, The Mostly Mathless Guide To TensorFlow Machine Learning, there’s basically no point in adding DRM protection, because my target audience is tech-savvy programmers. If they wanted to crack the DRM, they could easily do so.

Even Tim O'Reilly, the founder of O'Reilly Media, hates DRM. And it should be noted that O'Reilly Media, one of the largest and most famous programming/technology book publishers in the United States, doesn't use any DRM for any of its books. This is already a huge red flag for DRM.

In fact, DRM is so ineffective that the music industry doesn't even use DRM to protect its audio files anymore, and removing DRM from music has been found to boost sales by about 10%! It turns out that people who pirate music end up spreading the word about it, causing an increase in popularity and sales, and some of those pirates eventually become paying customers. Less popular albums even managed to increase their sales by a whopping 30%, just by removing DRM from their audio files.

 


Is DRM Harmful To Your Readers?


The short answer is yes.

The long answer is also yes, because DRM inhibits innovation and research, and is blatantly against the principles of fair use.

Suppose you were to purchase a physical book from a store. You are now free to do whatever you want with that book. You could burn it, resell it, store it away in a box, share it with your friends, or use and reproduce it for research, analytical, or educational purposes. Now suppose a team of researchers wanted to use a single chapter from your book as a source. Under fair use, there is no issue with passing the book around and printing out paper copies of that chapter, as long as the material is used for a limited period of time and strictly for research purposes.

Now what if that same team of researchers had bought a DRM-protected ebook? Well, they're out of luck, because DRM prevents them from sending a copy of the ebook to their peers, and it also prevents them from printing out a physical copy. If that team wanted to read through the book in parallel, they would need to buy a separate copy for each researcher, even though this situation should clearly fall under fair use.

Most notably, if you're reviewing a DRM-protected ebook (e.g., if you're a professional book reviewer), you aren't even allowed to share that ebook with your work colleagues, because the DRM literally prevents you from doing so.

Not only that, but when you use DRM on your ebooks, you are forcing your readers to use a specific platform or piece of software to read them. Suppose a reader wants to use their own personal ebook reader instead of the one the platform forces on them. Unfortunately, they can't. And this is especially a problem for some demographics, like blind readers, who may need accessibility features that only a specific ebook reader provides.

Perhaps the most annoying part about DRM is that many ebook readers, like Adobe's, don't allow you to copy and paste anything from the protected ebook. Suppose you want to copy a long quote. You can't. You have to type it out manually, or crack the DRM so that you can copy and paste it. This alone is enough justification to skip DRM, especially for programming books, where copying and pasting code from the book is far preferable to typing out every single character on your own.

 


Conclusion


DRM causes massive headaches for your readers, not only because it forces them to use a specific ebook reader, but also because those ebook readers often lack features or deliberately disable them (e.g., copying and pasting, or printing). Furthermore, DRM is incredibly easy to crack, so it isn't even an effective way of protecting your books from piracy.

Not only that, but DRM protection hurts researchers and educational institutions that want to use your book, because fair use laws don’t properly apply when it comes to DRM-protected books.

Additionally, it turns out that DRM protection even hurts your bottom line, as removing DRM can lead to roughly a 10% increase in sales!

If you want to maximize profit, morality, and convenience, then the obvious move is to leave your ebooks DRM-free. There are simply no good reasons to publish a DRM-protected book. Let your readers enjoy your book with their preferred e-book reader, without all of the extra nonsense and complexity that DRM adds. The music industry has already given up on DRM, and the ebook industry is likely soon to follow. Authors should give their readers the best reading experience possible.

Think about all the times you struggled to set up accounts and click on confirmation emails, just to read a single DRM-protected ebook, or when you wanted to copy and paste an excerpt from a book, but couldn’t because of the DRM-protection. Is that really the kind of experience you want to give your readers?

Why You Should Value Function Over Form With Window Managers

Many developers like to spend an excessive amount of time ricing their Linux distros, usually with window managers (WMs) like Awesome, i3wm, and monsterwm. Of course, window managers are often chosen because of their aesthetic, but in many cases, you should value function over form with window managers.

If you’re not familiar with window managers, they essentially split your screen into discrete sections, and assign windows into those sections. You’re free to resize these at any time so that certain windows can occupy more space, and depending on which window manager you choose, you can also stack windows on top of each other, place them in a tabbed view, and switch between multiple different screens. This, combined with the use of workspaces, allows you to conveniently jump back and forth between different sets of windows in an instant.

A screen-capture of i3wm, a popular window manager.

The image above is a bare-bones i3wm setup, with Terminator as the terminal and the standard i3status bar at the bottom. Aside from modifying the status bar and setting Terminator to use solarized-dark colors, this is an un-riced setup. Typically, ricers will begin the ricing process by using Conky.


The Art of Displaying Tons of Irrelevant Information With Conky


Many ricers love to display an excessive amount of irrelevant information on their desktop as a way to make their screen look fancy and technical. The more scary and overwhelming it looks, the better. Usually, this is done with Conky, a program whose purpose is to display information on your desktop. At first, this doesn't sound particularly interesting, but if you've ever looked at a riced desktop, then you know first-hand how impressive Conky can make a desktop look.

Conky in action.

I know what you're thinking — you're probably thinking, "That's amazing! I want my desktop to look like that too! Time to close this article and look up a ricing guide." It's a trap. Speaking from personal experience, ricing is a massive time-sink. Imagine you're trying to make a website look fancy by using CSS. Now swap out the CSS for Lua.

The great thing about making Conky scripts in Lua is that there are no selectors, meaning you need to constantly copy and paste the styling over and over again. For example, suppose you have yellow text in Times New Roman at font size 12. If you want ten separate sets of words with that exact styling, you will simply have to copy and paste the exact same styling ten times.

Here's an example from the Conky setup shown above, which you can find on GitHub at https://github.com/xyphanajay/conky/blob/master/.conkyrc1

conky.text = [[
${font DejaVu Sans Mono:size=14}${alignc}${time %I:%M:%S}
${font Impact:size=10}${alignc}${time %A, %B %e, %Y}
${font Entopia:size=12}${color orange}CALENDAR ${hr 2}$color
${font DejaVu Sans Mono:size=9}${execpi 1800 DA=`date +%_d`; cal | sed s/"\(^\|[^0-9]\)$DA"'\b'/'\1${color orange}'"$DA"'$color'/}
${font Entopia:bold:size=12}${color red}FILE SYSTEM ${hr 2}${font Noto sans:size=8}
#${offset 4}${color}dev ${alignr}FREE     USED
${offset 4}${color}root (${fs_type /}) ${color yellow}${alignr}${fs_free /} ${fs_used /}
${offset 4}${color yellow}${fs_size /} ${color}${fs_bar 4 /}
${offset 4}${color FFFDE2}home (${fs_type /home}) ${color yellow}${alignr}${fs_free /home/} ${fs_used /home/}
${offset 4}${color yellow}${fs_size /home/} $color${fs_bar 4 /home/}
${offset 4}${color FFFDE2}sda5 (${fs_type /run/media/senpai/6EC832DEC832A3ED/}) ${color yellow}${alignr}${fs_free /run/media/senpai/6EC832DEC832A3ED/} ${fs_used /run/media/senpai/6EC832DEC832A3ED/}
${offset 4}${color yellow}${fs_size /run/media/senpai/6EC832DEC832A3ED/} $color${fs_bar 4 /run/media/senpai/6EC832DEC832A3ED/}
${offset 4}${color FFFDE2}sda6 (${fs_type /run/media/senpai/6EC832DEC832A3ED/}) ${color yellow}${alignr}${fs_free /run/media/senpai/C208F88708F87BAB/} ${fs_used /run/media/senpai/C208F88708F87BAB/}
${offset 4}${color yellow}${fs_size /run/media/senpai/C208F88708F87BAB/} $color${fs_bar 4 /run/media/senpai/C208F88708F87BAB/}
${font Entopia:bold:size=12}${color green}CPU ${hr 2}
${offset 4}${color black}${cpugraph F600AA 5000a0}
${offset 4}${font DejaVu Sans Mono:size=9}${color white}CPU: $cpu% ${color red}${cpubar 6}
${font Entopia:bold:size=12}${color 00FFD0}Network ${hr 2}  
${color black}${downspeedgraph enp8s0 32,80 ff0000 0000ff}${color black}${upspeedgraph enp8s0 32,80 0000ff ff0000}
$color${font DejaVu Sans Mono:size=8}▼ ${downspeed enp8s0}${alignc}${color green} IPv6${alignr}${color}▲ ${upspeed enp8s0}
${color black}${downspeedgraph wlp10s0f0 32,80 ff0000 0000ff}${color black}${upspeedgraph wlp10s0f0 32,80 0000ff ff0000}
$color${font DejaVu Sans Mono:size=8} ▼ ${downspeed wlp10s0f0}${alignc}${color orange} ${wireless_essid wlp10s0f0}${alignr}${color}▲ ${upspeed wlp10s0f0}
${font Entopia:bold:size=12}${color F600AA}Disk I/O ${hr 2}
${alignc}${font}${color white}SSD vs HDD $mpd_name
${color black}${diskiograph /dev/sda 32,80 a0af00 00110f}${diskiograph /dev/sdb 32,80 f0000f 0f0f00}
${font DejaVu Sans Mono:size=8}${color white}   ${diskio /dev/sda}${alignr}${diskio /dev/sdb}
]]

It’s not exactly the prettiest.

But there's more. With Conky, your desktop doesn't act like a normal web page; it acts more like a canvas where everything is practically absolutely positioned. Finally, because you'll likely be too lazy to learn Lua and Conky's many nuances just for the sake of ricing, you'll likely resort to copying and pasting snippets from all over the internet, and with each copy-pasted snippet you add, your configuration becomes even more hacky and unreadable.


If You Value Your Screen Real Estate, Don’t Rice Your Desktop


Ricing your desktop almost always results in massive amounts of frustration, wasted time, and effort. The fun of showing off your setup only lasts for a few brief moments. After that, your riced desktop is basically useless or obstructive, because you'll either have your Conky setup overlay the desktop, meaning it will be hidden whenever a window covers it, or overlay all of your windows, which means it's permanently visible.

If you choose to have Conky only on your desktop, it will be 100% covered by whatever window you have open, since a window manager makes windows consume as much desktop space as possible. Since you'll almost always have tons of terminals or programs running on most of your workspaces, you'll almost never see your desktop. Once the novelty wears off, your desktop will feel as ordinary as before. And if you decide to automate everything, which you likely will, you'll have programs automatically opened on all your workspaces anyway.

If you choose to have Conky overlay all of your windows, you will quickly realize how annoying it is to have all of that extraneous information plastered over your screen at all times. In most cases, it's distracting and reduces the amount of usable space on your screen. Choose this option and you lose anywhere from 20-40% of your screen's real estate. Assuming that the code you're working on is limited to 80 characters per line, you won't be able to split your windows vertically, because that would reduce each window's real estate to only ~30% of the screen, and 80-character lines won't fit.
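For a rough sense of the numbers behind that claim, here's a quick back-of-the-envelope sketch (the monitor width, overlay share, and pixels-per-character figure are all assumptions):

# Assumed values: a 1920-px-wide monitor, a Conky overlay eating ~35% of it,
# and roughly 9 px per character for a typical monospace font.
screen_px = 1920
conky_share = 0.35
per_window_share = (1 - conky_share) / 2        # two windows split vertically
per_window_px = screen_px * per_window_share
chars_per_line = per_window_px / 9
print(f"{per_window_share:.0%} of the screen per window, about {chars_per_line:.0f} characters per line")
# Roughly a third of the screen per window and ~69 characters, so 80-character lines don't fit.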

 

It might be cool to have something like the Conky animation above on your desktop, but if it's eating up tons of precious desktop real estate, then it's not worth it. With an animation like that, you won't be able to read code, debuggers, or terminal/console logs without putting your window in full screen. And it goes without saying that there's no difference between using a window manager and a standard desktop environment like GNOME if you're only going to view things in full-screen mode.

Ricing your desktop might look pretty, but in my experience, it has always turned out to be a massive waste of time, either because the novelty wore off or because it eventually became annoying. The most efficient way to use a window manager is to use it for what it’s made for — maximizing screen real estate. By using 100% of your screen, leaving no blank spots, you’ll be able to maximize the amount of information you can view at once.

Incidentally, by ricing your desktop, you’ll likely either reduce the amount of information you can view at once, or you’ll end up plastering large amounts of useless information on your desktop, both of which will end up reducing your productivity. Should you decide to use a window manager, remember to value function over form, because at the end of the day, computers are tools, not decorations.