Snap Packages In Linux

When installing a snap package on Linux, Ubuntu users already have snapd, the snap daemon, installed on their system. This means that as of 18.04 you can install snap packages with sudo snap install packagename. Snaps make it easy to install a package together with all of its up-to-date dependencies. Snaps run in containerised sandboxes, so security is a priority and a malicious package can't change the system without user approval. Previously, a Linux user could be tortured by broken packages or unmet dependencies; that is quickly becoming a thing of the past. With so much support for the new package management system, users of all distros and desktop environments can have the latest and greatest software at their disposal. Even the Windows world is taking an interest in this marvel of modern computing.

Manjaro and Arch users may not have snapd installed automatically. To install it with pacman, type <sudo pacman -S --noconfirm snapd> in a terminal. The Snap Store is a repository where vendors upload their finished projects directly; Opera, for example, has recently uploaded its desktop browser versions to the snap store.

  1. To search for a package by name, type <snap find packagename>.

  2. To install a package, type <sudo snap install package>.

  3. To revert or downgrade a package in snap, type <sudo snap revert package>.

  4. To update a snap app, type <sudo snap refresh package>.

  5. To uninstall a package in snap, type <sudo snap remove package>.

  6. To make a list of all packages installed through snap, type <snap list >> snap.txt>.

Snap is still a work in progress, so some distros will be hesitant to adopt it, but the number of packages is increasing. Chances are that a package you use is available in the snap repository. Users who are also developers can benefit from snapcraft, a tool for building their own snap apps.

To Learn More:

To Visit Snapcraft:

Bash Scripting Tutorial #3: User Input

User prompting can be done in at least a couple of different ways in bash. It is useful when you are drafting a project or an important script for someone else to use to complete a task. When collecting user input, the input is placed in a variable (see the last article) and the value of that variable is then used when the variable is called later in the script. Multiple variables can be specified on the read line, which allows for more than one answer to the script's question. One situation where this is useful is when a client needs to sort and review large text files or spreadsheets by the information in them: Linux has commands for sifting through and sorting data, but to apply them in a script, the script has to know which files to look through and what to do with the information. Another example is a simple bash CLI game to pass the time at work when you should really be doing something else but you're just not feeling it. The technique is also useful when collecting and parsing information about a client or employee. A simple example would be the following:

echo "Enter your name"

read name

echo "Enter your birthday"

read birthday

echo "Hello $name, your birthday is $birthday"

Here, name and birthday hold the information you entered when prompted. This is a sequence of STDIN and STDOUT, or standard input and standard output (input being what you entered, output being what was printed to the screen after all the information had been gathered). Another prime example would be merely using a read line like:

read -p "Enter a series of potential baby names: " name1 name2 name3 name4

echo "$name1" >> babynames.txt

echo "$name2" >> babynames.txt

echo "$name3" >> babynames.txt

echo "$name4" >> babynames.txt

This script prompts the user for baby names they are considering and saves them to a list in a file called babynames.txt. Here we redirect the output to a file rather than displaying it on the screen briefly before losing it. Unfortunately, this only allows for four names; to add more names, or one name over multiple iterations, you would most likely want to use the first method in a loop (more about these later). Next we will be looking at if statements.
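The loop-based variant mentioned above can be sketched like this; the function name collect_names and the blank-line-to-finish convention are my own additions, not part of the original script:

```shell
# Keep asking for names until the user enters a blank line,
# appending each one to the file given as the first argument.
collect_names() {
  local name
  while read -r -p "Enter a name (blank to finish): " name; do
    [ -z "$name" ] && break
    echo "$name" >> "$1"
  done
}

# Non-interactive demo: feed three names followed by a blank line.
outfile=$(mktemp)
printf 'Ada\nGrace\nLinus\n\n' | collect_names "$outfile"
wc -l < "$outfile"
```

Run interactively, the prompt appears before each name; with piped input, as in the demo, bash reads the lines silently and the file ends up with three names.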

These guys have even more useful examples:

Bash Scripting Tutorial #2: Declaring Variables

When writing scripts, it is often important to declare and use variables. Declaring variables is super easy. There are multiple ways to declare them, but they all work the same way. One method I often employ is to prompt users for input, place that input in a variable, and then use the variable to tell the script what the user wants; the variable acts as a wrapper for something else. Variables are great for projects where you don't know in advance what the value will be, but you know what you want the variable to do. Other commands can be used as the value of a variable, and once a variable has a value, the echo command will print it to the screen when you type echo $variable. A good example of declaring and calling a variable would be opening a terminal and typing var1=$(cat filename | grep "RandomPattern") and then typing echo $var1. Typing echo with the variable name displays the value of that variable. Here, filename is just an example, since we didn't actually call a real file with anything inside it; grep would have looked for the quoted pattern, and the matching lines would have been the value. Another example would be:


num1=1

echo $num1

My output in this scenario would be the number one. Variables are often the first thing you learn in programming classes, as they are used throughout whatever project you are trying to accomplish. While scripting languages are different from Java or C, the idea is roughly the same: across environments, variables are declared and used in much the same way. From Python to Bash to Ruby, they all use them. A final example is an excerpt from one of my own personal scripts:

echo "Enter the name of any software you'd like to install"

read software

This example relies on the user to give input, and uses the read command to register that input as the value of the variable called software. When the script continues, it runs something similar to this:

sudo pacman -S --noconfirm $software

The above command installs all software specified and stored in $software. Variable values can be treated as integers, strings, or booleans. A boolean is either true or false, and unlike the other kinds of values, it is best suited for cases where the outcome of a scenario is uncertain. The previous examples used integer and string values respectively, but now it is time to see a true/false condition in action. An example would be this:

while true; do

    some command

done

Another form of this would be:

find /etc/hosts.bak

while [ $? -gt 0 ]
do

    sudo cp /etc/hosts /etc/hosts.bak

done
This form looks for the file /etc/hosts.bak. If the file is not found, find returns a non-zero exit status, so the condition is true and the loop creates the file; the cp command then succeeds, the condition becomes false, and the loop exits. If the file had existed, find would have returned "0" and simply printed the file name in question. We will get further into this, and while loops in general, at a later time.
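Here is a runnable version of that pattern, pointed at throwaway files under /tmp instead of the real /etc/hosts, so it can be tried safely; the .demo file names are placeholders of my own:

```shell
src=/tmp/hosts.demo
bak=/tmp/hosts.demo.bak
echo "127.0.0.1 localhost" > "$src"
rm -f "$bak"                   # make sure the backup is missing to start with

find "$bak" > /dev/null 2>&1   # file not found, so $? is non-zero
while [ $? -gt 0 ]
do
    cp "$src" "$bak"           # cp succeeds, $? becomes 0, and the loop exits
done
```

Because the while condition re-reads $? on every pass, the exit status of the last command in the body (cp) is what ends the loop.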

Bash Scripting Tutorial #1: INTRO

When performing command line (CLI) tasks in Linux, things can become tedious when there is a lot to do, for instance when working as an administrator for a small, medium or large company. Automation is very helpful when parsing large files or running the same set of commands more than once a day. Scripts are basically text documents that run a series of commands in succession, one after another; think of it as the script of a play. Scripts start with the #! sign at the top, known as a shebang. The shebang tells the system that the following text document is a script and which interpreter should run it. The interpreter comes after the shebang, like so: #!/bin/bash, or more portably #!/usr/bin/env bash, replacing bash with whatever environment the script is to be read by. Most shell scripts use bash or sh; Python and Ruby scripts use their own interpreters respectively.
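To make that concrete, here is a minimal script with a shebang being created, marked executable, and run; the /tmp/hello.sh path is just an example:

```shell
# Write a two-line script to /tmp/hello.sh
cat > /tmp/hello.sh <<'EOF'
#!/bin/bash
echo "Hello from a script"
EOF

chmod +x /tmp/hello.sh   # the execute bit lets us run it directly
/tmp/hello.sh            # → Hello from a script
```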

Bash and sh are not always the same thing. Bash, the Bourne Again Shell, handles many commands very differently from the plain Bourne shell. Plain sh doesn't cope well with complex tasks, so most complex scripts are written with #!/bin/bash. There are plenty of ways to do different tasks within bash; some commands are more complex for a more complex need, and some complex code is just there to show off the coder's skills. Writing code in any sense tells a bit about the person writing it, their thought processes and so on. When the developer of an OS sets certain scripts to be run from the system's back end, many of them are scheduled by Anacron and use #!/bin/sh as the interpreter. These scripts are usually found in the /etc/cron.daily, /etc/cron.weekly and /etc/cron.monthly folders, and they usually consist of one or two lines to do tasks like updating the locate database, updating the man databases or rotating logs.

Bash is good for most needs; however, it is imperative to plan your next script with the job in mind. What am I trying to accomplish? Will this deal with numbers or strings? How will it behave when automated? In the next few tutorials I will go over basic syntax for everyday commands, and I will even talk more about systemd timers and scheduling.


As I stated previously, there is no shortage of software for Linux; each task seems to have more than one really good application. Here I will go over ten pieces of software I can't do without, or that I've read about and am really interested in, in no specific order. I will follow up later with an article on ten open-source applications I hate.

  1. VIVALDI: There are a few good browsers out there; anyone can see that Linux isn't as limited as it used to be in this area. However, for my own use and purpose, Vivaldi is at the top of the list. It is built on Chromium and has the same JavaScript engine; what makes it different is the interface. You can do almost anything with it: stack similar tabs, prioritize audio across tabs, and hibernate background tabs to spare resources, something that takes a third-party extension on other browsers.

  2. DELUGE: I get it, I'm using Transmission right now, but Deluge is by far the best BitTorrent client for Linux. It's open source, cross-platform, and has all the essentials you would need. Most of these "essentials" come as plugins that can be switched on easily in the settings; they include blocklists, a bandwidth-control scheduler, auto-add, and more. Deluge has some similarities with qBittorrent, but qBittorrent is a Qt application; Deluge, being GTK-based, works better with GTK desktops, at least for me.

  3. PAROLE: Media codecs are extensive these days. There is no doubt that VLC at least used to be better at playing DVDs; however, nowadays I can play most DVDs on my Linux machine using Parole. Parole also doesn't have VLC's quarrels with Qt plugins in Manjaro, for instance. Development on Parole resumed not long ago, after the future of the application had been uncertain, and its developers finally released a new stable update to this prized application, which favours Xfce desktops over anything else. It's very lightweight, even in comparison with VLC.

  4. GEANY: I have a lot of fun learning to code. It's not just the satisfaction of feeling like a hacker while typing away at my keyboard; it's the feeling of solving a problem or otherwise making something more accessible. Whether I'm writing scripts for Linux, learning to write something basic in C or Java, or drafting something in HTML, it doesn't hurt to have a good IDE/text editor that can handle the job. Geany (pronounced "genie") is such an application. It highlights code and handles an array of programming and markup languages right out of the box. A runner-up would be Bluefish, but that is tailored more toward HTML. Most people complain that the white background hurts their eyes, but few realize there is a way to invert the colours; I will do a tutorial on that soon enough.

  5. BLEACHBIT: It's true, cleanup in Linux isn't really an issue. While there are a few nifty utilities that do this for you, most people are concerned with just how much these applications clean. There is good reason to be nervous when using one, but most problems from running them come down to user error. An all-around simple tool for cleaning cache and other debris from a multitude of applications on the system, BleachBit is to Linux what CCleaner is to Windows, and it is cross-platform too. BleachBit has many similarities with CCleaner, such as its use of configuration files to tell it what it can and cannot clean. Editing these could yield a larger list of applications you can safely clean, but for regular users the standard list is fine. BleachBit can also shred files and wipe free disk space. For quick cleaning, this is my go-to.

  6. HTOP: I prefer this even over my desktop's own system monitoring app for Xfce on most occasions. I mainly like it because it seems more accurate, and it tells me exactly what is using how much in a way that beats the competition. Htop is a handy CLI system monitor: it uses your terminal to display process and RAM information in one compact, neatly organized window. Htop also gives you some control over applications, much like its graphical counterparts. While it is a bit more complicated for new users, using it is pretty straightforward; most actions rely on the function keys.

  7. XSENSORS: While the Xfce desktop, especially in Manjaro, has plenty of sensor information available with the addition of the goodies package, it just seems a more efficient use of space to use Xsensors. Like other sensor apps, Xsensors uses lm-sensors to display CPU, GPU and other relevant temperature/voltage information, depending upon your motherboard's capabilities. Xsensors can easily be bound to a keyboard shortcut; I prefer using F1 for this.

  8. BRASERO: Brasero is a simple disc-burning utility for Linux. I chose it over Xfburn because the interface is more modern.

  9. LIBREOFFICE: While neither Linux- nor distribution-specific, and while not the only office suite on Linux, I prefer it for its abundance of features and its ability to work with projects started on either Windows or Linux, in almost any setting. It has a good selection of fonts (more can be added by installing proprietary fonts on the system) and a good spell-checker and language database to which more languages can be added; this relies heavily on the Hunspell package being installed. The default layout is what I am used to.

  10. PLUMA: I already gave my favourite editor, but this is an editor of a different breed altogether. Pluma comes from the MATE desktop project. While similar applications do exist, this one is firmly MATE: stable, lightweight, and plentiful enough in features that I can get simple, quick edits finished fast. If the system I'm on didn't already come with Mousepad, I'd definitely install this one first.

And there we have my ten most-loved applications for Linux; stay tuned for my most disappointing apps later. Thanks!


Linux is more secure than Windows, and this is no secret. Many websites and companies alike use Linux for their servers. Some Linux desktops are more stable than others, that is true, but Linux servers have fewer attack surfaces thanks to their headless (no desktop) setup by default. Linux servers are sturdy, with no graphical interface or other unneeded software getting in the way. Linux servers are, in my opinion, far superior to Windows servers, which has much to do with the lack of graphical tools as well as how the system is laid out by default. The Linux file system is structured in a much more organized fashion, whereas a Windows machine's layout can feel like constant chaos, always moving files around to and fro.

Another way Linux is more secure than Windows is its use of separate accounts; Ubuntu sets this up by default, and Manjaro is generally used in a similar way. Arch and Debian usually handle it differently, and one could argue their way is actually better, though it is more restrictive: in Debian, I have to log in as root to install something, yet I can still use software from the standard user account. Nothing like this is set up by default in Windows. It means that files and programs can only access what the user has access to while the user is running the program.

Linux security also depends on files being marked executable. This goes back to the file system: files aren't generally executable by default. If any file could run on its own, it could run rampant and cause memory and CPU issues. Writing software for Linux is a different process than for Windows, where files are typically executable by default, and by design; Windows is meant to be easily accessible, perhaps so much so that more viruses target it as a whole. Windows viruses also tend not to run on Linux systems, since Linux doesn't treat their file extensions as executable. Linux itself isn't the operating system, contrary to popular belief; it is the kernel. That kernel, however, is audited every day by thousands of people worldwide.
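The execute-bit point above is easy to demonstrate in a terminal: a freshly created file cannot be executed until you explicitly grant permission. This sketch uses a temp file rather than any real system path:

```shell
f=$(mktemp)                # new files are not created executable
echo 'echo hi' > "$f"

# → not executable
if [ -x "$f" ]; then echo "executable"; else echo "not executable"; fi

chmod +x "$f"              # the user must grant the permission explicitly

# → executable
if [ -x "$f" ]; then echo "executable"; else echo "not executable"; fi
```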

Most of the software on top of the Linux kernel is open source, which means it is easily viewable: every line of code can be read and accounted for. Many people can't read code, but that doesn't stop the many talented geeks who can. Generally this software is free, but regardless of its monetary status, Linux is open while Windows is behind closed doors, generally shut off from third-party auditing. Also, unlike Windows, there is no telemetry collection. This openness is quickly becoming popular with computer users across the globe.

Lastly, Linux has exceptional tools for monitoring and intrusion detection (Tripwire, Snort, etc.). It's no wonder so many companies and website owners use it for their back end. Linux market share is rising slightly on the desktop as well. While it still has a way to go before it is targeted as widely as Windows, it shows promise, and shows what can be accomplished with open-source software. Moving forward, we should expect to see it prosper and bloom. Linux is the future.

Shameless Plug:


Task scheduling is an important and useful facility built into both Windows and Linux. For a while, scheduling tasks in Linux wasn't as easy as in Windows; technically it still isn't, but it has grown leaps and bounds over what it used to be. With Linux you have a multitude of options, and Cron and Anacron are just two of them. Today we'll focus mainly on these two: their similarities and differences, what makes each of them great, and what might make one better than the other depending on the circumstances. Task scheduling in Windows was a difficult thing for me to learn as well, but it was pretty straightforward once I figured it out. In Linux, you have to use either the built-in systemd timers or Cron/Anacron. Systemd timers are a really good choice if you are trying to be more precise about timing; however, that is for a later article.

Both Cron and Anacron use the system time, but what makes them different is that Anacron runs shortly after the computer is booted (by default, about five minutes in) and then catches up on whatever daily, weekly or monthly jobs have fallen due. Anacron is also picky about the shell environment used to perform system functions on such a basis. This could be a downside for new users who want third-party scripts to handle maintenance and other things without working at the system manually all the time. For those users there is Cron, but it too takes a small amount of knowledge to set up. Crontab traditionally mails job output to the user via the local mail (SMTP) system; most Linux setups no longer have this configured, but it is not a requirement, as job output can be redirected to a log file instead.

Cron is a bit simpler to use; in Ubuntu, the crontab file even has a basic outline of how to use the program, and it offers a convenient home-folder backup example. Arch users are left with a clean file and have to know the syntax to create their own jobs. One thing Cron does have going for it: jobs can use Bash (the Bourne Again Shell) or the plain shell. Cron has two modes, root and user, and the two enact functions based on the respective account's privileges. As a standard user, you might find it difficult to run maintenance or backup tasks from Cron, whereas as root you have the full system at your disposal. Anacron uses the root account only to make changes to the system, such as updates, apt-xapian indexing, updatedb, mandb and log rotation.

Anacron has daily, weekly and monthly periods as its primary time settings; each of these has a separate folder of jobs that run on the schedule set up when you installed your system. Both crontab and Anacron are installed on most desktop systems and servers today, but this is not always the case; in Manjaro, cronie has to be added in order to use crontab. In early versions, such as on early Unix systems, Cron was a system service or daemon started via /etc/rc. At first, Cron didn't have multi-user support of its own, relying on the Unix system for that, but in later versions multi-user support was added.
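For reference, an /etc/anacrontab on a Debian- or Ubuntu-style system looks roughly like the following; the exact delays and commands vary by distro, so treat this as illustrative:

```
# /etc/anacrontab: period(days)  delay(min)  job-identifier  command
1         5   cron.daily    run-parts /etc/cron.daily
7        10   cron.weekly   run-parts /etc/cron.weekly
@monthly 15   cron.monthly  run-parts /etc/cron.monthly
```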

Cron uses a pretty straightforward syntax. A crontab (cron table) entry is laid out as: MM HH DOM M DOW command-to-execute. In other words, you supply the minute (MM), the hour (HH), the day of the month (DOM), the month (M) and the day of the week (DOW). The minute can be any number from 0 to 59, the hour from 0 to 23, the day of the month from 1 to 31, the month from 1 to 12, and the day of the week from 0 to 6, with 7 accepted as Sunday on some systems. The command to execute can be anything the user writes, anything tailored for the user, or anything the system normally runs. For instance, say I wrote a script to update my hosts file, named it hostsupdate, and wanted to run it once daily at around 5:30 pm. I could simply add the following line: 30 17 * * * /bin/bash /home/$USER/hostsupdate. That will run whatever is in the script, but if I wanted to check its work, I'd have to add a line like cat /etc/hosts > hosts.log in the script. If I simply wanted to be alerted that it ran, I could end the cron job with > log1, and log1 would be created in my home directory. Another way, which I haven't tried but should be possible, is running two commands together in one crontab line: 30 17 * * * /bin/bash /home/$USER/hostsupdate && cat /etc/hosts > hosts.log. Note that for a script to update the hosts file, I need to run it using the bash shell.
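Putting that syntax together, the hostsupdate example could live in a crontab (opened with crontab -e) as a line like the following; the log redirection here is my own addition, not a requirement:

```
# m   h  dom mon dow  command
 30  17   *   *   *   /bin/bash /home/user/hostsupdate >> /home/user/hosts.log 2>&1
```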

Depending on your needs, Cron may be better than Anacron; however, if you just want to run a simple update procedure at the same time every day or week, Anacron is a great tool that is already built into most Linux systems. Next week I will probably be getting into /etc/rc and /etc/init.d, along with systemd timers. If you'd like to know more about these options for task scheduling in Linux, open a terminal and type man cron or man anacron. There is also a text-based website that gives similar information; the link will be below.

Link for more:


As with the previous part of this topic, it's good to do a routine clearing-out of the dust inside your PC. How often depends on whom you ask; ask two different people and you'll get two different opinions. For some, it also depends on where you live: people living in Arizona typically have more dust in and around their homes, so more cleaning is necessary, and even then it still seeps into the machine. Dust and cigarette smoke are silent killers of computer hardware. Any smoke can be bad for a computer, as it leaves a residue, and dust builds up quickly where there is a lot of smoke; the two almost seem to go hand-in-hand.

But what can you do to remove this crap from your fans and heat sinks? Most computer owners religiously use cans of compressed air, which can be bought at your local Walmart or Office Depot. Many people used to take their machines to a neighbourhood computer repair shop for the same maintenance rather than doing it themselves. As with all hardware, it's good to eliminate as much dust as you can, since dust acts as an insulator on electronics, and buying a three- or four-pack of canned air might save you upwards of 50 or 60 dollars. You're welcome!

Last time, if you'll remember, I talked about several commands that clean software you no longer want from your computer; some of them also remove unwanted files and debris. While this is probably not needed as often on Linux machines as on Windows, it's still good practice to free up space when you notice your browser running slowly, or when you think your journal files might be corrupt (I'll write an article about that later). I also promised to show you how to clean your /tmp directory. I haven't forgotten; it's just so simple that you can do it with as little effort as rebooting or shutting down your system, which frees cached RAM as well as removing any virtual files connected with it. These files are not just unneeded browsing data; the directory also contains copies of built or half-built software. Windows doesn't clean its temp directory automatically, but Linux does. My scripts include a command that does this for you, but it's almost redundant for me, because when I run the cleanup function on my systems I quite often reboot anyway.

With all that introduction out of the way, it’s time to move on to the actual cleaning. I’ll do these in steps so you don’t get lost.


What you'll need:

  1. Canned air

  2. Microfiber cloth

  3. Water gel or 91 percent alcohol

  4. Kleenex

  5. Paintbrush


Steps:

  1. First, shut down the device. This should be apparent by now, but electricity could kill you, so do this first.

  2. Open the case, and press the power button a few times to dissipate any remaining energy. When fooling with computer innards, it's also best to touch the metal of the case first to ground yourself, though for this part you will hardly need it.

  3. Grab a can of compressed air (duster). It should come with a small straw; place the straw into the tiny hole on the nozzle. The straw sometimes likes to fall out, so you've been warned.

  4. Hold the can of compressed air up to the CPU heatsink and fan, but don't let it touch. Rest a small pen or a finger between the fan blades so that the fan doesn't spin; spinning the fan the wrong way can damage it.

  5. Now squeeze the trigger in short, repetitive bursts to knock dust free from the ridges of the heatsink. If dust is still caked on, you may need to come back later with a small brush or something similar. Be gentle with this method; some heatsink-fan combos do come apart, so check to see whether yours will.

  6. Next, move to the power supply and try to blow from the inside out first, using the same repetitive blasts as before, then change it up with longer ones. Then move to the back of the system and, if you can, place a pen between the fan blades to keep the fan from spinning too much. This will force some dust back into the system, so we will need to blow the case out further.

  7. Now move to the case fan, blow out both ends, then take a paint brush and try to sweep the remaining dust off of the blades as much as possible.

  8. Now blow any and all hard-drive bays free of dust, and try to focus on the RAM modules too; there's not a lot you can do while they're in their slots, but get as much as you can. This will make our job a little easier later.

  9. Now just blow randomly throughout the case. If your case has a rather closed front bezel, take it off and wipe/brush any ventilation holes and USB ports.

  10. Now it's time to wipe out the exposed parts of the case itself and clean the ventilation holes on the cover. For this I use a microfiber cloth with water gel, the same substance I use to clean my monitor, or a thin napkin or Kleenex with 91 percent alcohol. The alcohol doubles as a solvent to remove thermal paste, should you wish to replace the cooler or upgrade the CPU. Note that you won't get every bit of dust with this part of the cleaning, but you will get the majority of the dust that could potentially clog your fans later.

  11. Another good but optional step is to pull the individual RAM modules, taking note of where each one goes, or to pull and replace them one at a time; this can help remove dirt. Take a paintbrush and gently brush the modules. Touching the pins is not usually recommended, but it's impossible to clean the modules without touching them a little. Blowing out the slot each stick was locked into is another good idea. RAM produces heat like everything else, so dust is a no-no.

As a follow-up, take the paintbrush and brush gently over the heatsink-fan combo, the case fan, the power supply vents, the vents on the case, and any PCI cards protruding from the motherboard. This helps remove any loosened dust that the air didn't carry away, and it is also good for getting along the edges of the fan blades. It's good to blow out the machine every three months or so, but this deeper clean is for being more thorough. Now that your computer is clean, you can hook up the cables and try to boot the machine; if everything boots fine, that's it. It's also possible to blow out the dust with a vacuum cleaner or a leaf blower, though this is usually advised against because of the remote possibility of creating a static charge against the motherboard. I have never experienced this issue, and not once have I had to replace a motherboard because of this method; that said, canned air might be the safer solution.

Link to scripts:


Spring is almost upon us again, and with spring comes the yearly cleaning that gets so deep it doesn't happen at any other time of year. The same should go for your computer. Over the year we've tested new software, saved copious amounts of web cache, and stored various pictures and other files on the hard disk; on our Linux machines we may have rarely rebooted, so there are old kernels and other system packages hanging around that maybe haven't even been applied. There is also the developer's computer, with all those extra files lying around from trying various scenarios to get around a specific problem in a program... No? Must be me then. All these things may or may not be needed now, so it's a good idea to clear them out of the way to make room for new ones. Here I stress the idea of making occasional home-directory backups, along with a few of the commands I use to clean my system. This is part 1; part 2 will deal more with cleaning inside the case. If you're squeamish, that one may not be for you.


This step is quite simple. You first need to back up any data you don't want to risk losing. This includes family photos, beats, personal documents, and pretty much anything found in your home folder. To create a backup of your home directory, run the following:

sudo rsync -aAXv --delete --exclude={"/home/*/.cache","/home/*/.thumbnails","/home/*/.local/share/Trash"} /home/$USER/ /path/to/backup/

This backs up everything except the cache folder, the thumbnails folder, and the trash folder. Pretty straightforward, and then you're done.
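Before pointing this at a real backup drive, it's worth a rehearsal. The sketch below runs the same idea against throwaway directories made with mktemp, so nothing real is touched, and shows rsync's -n dry-run flag. For the real run you'd keep the full -aAXv flags and exclude list from above; the paths here are stand-ins.

```shell
#!/bin/sh
# Demo of the backup step on throwaway directories.
SRC=$(mktemp -d)     # stand-in for /home/$USER
DEST=$(mktemp -d)    # stand-in for your backup drive
mkdir -p "$SRC/.cache"
echo "photo" > "$SRC/photo.jpg"
echo "junk"  > "$SRC/.cache/tmp.bin"

# Dry run first: -n makes rsync print what it WOULD copy, writing nothing.
rsync -avn --delete --exclude=".cache" "$SRC/" "$DEST/"

# The real run: the same command minus -n.
rsync -av --delete --exclude=".cache" "$SRC/" "$DEST/"

ls -A "$DEST"   # photo.jpg was copied; .cache was excluded
```

The trailing slash on "$SRC/" matters: it copies the contents of the folder rather than the folder itself.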


When cleaning your system, it's important to start with the cached data and thumbnails, which are not important to the system at all. The .cache folder lives in the home directory and can account for anywhere from 1 to 50 gigs depending on when you last cleaned it out. To clear this debris, we use the following commands:

rm -r ~/.cache/*

rm -r ~/.thumbnails/*

This alone clears most of the junk data we no longer need. It does not clean browser history or cookies; if you want to clean those, stay tuned. The next step takes care of the trash folder, which is also located in the home directory.


Cleaning trash files is another important step. Sometimes trash sits there for over a year because we forget to clean it and the system never says anything, so it goes unnoticed. When we "delete" a file, it usually just gets moved there anyway. Removing this could save you a boatload of space. To clean it, we use the following command:

rm -r ~/.local/share/Trash/*

This clears all of the data in your home's trash disposal. It's a great way to free up some potential gigs, but that's not all; we have a few other steps on the way.
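Before running any of the removal commands above, it can be motivating (and reassuring) to see how much space each spot actually takes. One way to check:

```shell
# du -sh prints one human-readable total per path; 2>/dev/null hides
# errors for folders that don't exist on your particular setup.
du -sh ~/.cache ~/.thumbnails ~/.local/share/Trash 2>/dev/null
```

Run it again after cleaning to see exactly what you got back.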


In this step, we clean out the package cache. This is one of the biggest places for data to accumulate and grow to enormous sizes, since it stores the older versions of your software. In this step we go over solutions for both Ubuntu-based and Arch-based distributions, so use the commands that match your package management solution. For Ubuntu-based systems, use the following:


sudo apt-get autoremove

sudo apt-get autoclean

sudo apt-get clean

The Arch-based tools offer multiple options, so we will give you an example of each.


sudo paccache -rvk3

#This removes all but the latest three versions (paccache ships in the pacman-contrib package)

sudo pacman -Sc

#This removes cached versions of packages that are no longer installed

sudo pacman -Scc

#This empties the cache entirely, all versions of everything

I have scripts that will handle this for you, but it's important to learn which option to use and when. In Arch you generally want to keep the latest three versions, as this lets you revert later, but that isn't always practical since the package cache can take a lot of disk space. So when you are spring cleaning, I recommend the second option. But this is still up to you.
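To keep the three options straight, here's a tiny sketch of a helper in the spirit of those scripts. The function name is made up for illustration; it only prints the command it would run, so you can eyeball it before piping to sh.

```shell
# Hypothetical helper: map the number of versions you want to keep
# to the matching cache-cleaning command, and print it (dry run).
cache_clean_cmd() {
    case "$1" in
        3) echo "sudo paccache -rvk3" ;;  # keep the latest three versions
        1) echo "sudo pacman -Sc"     ;;  # drop uninstalled packages' cache
        0) echo "sudo pacman -Scc"    ;;  # drop everything in the cache
        *) echo "usage: cache_clean_cmd 3|1|0" >&2; return 1 ;;
    esac
}

cache_clean_cmd 3   # prints: sudo paccache -rvk3
```

Something like `cache_clean_cmd 1 | sh` would then actually run it, once you trust the output.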


While you're cleaning the package cache anyway, it's also important to clear orphaned packages. These are packages that are no longer required as a dependency of anything and are usually no longer needed. They can add up if you've never cleaned them before. The easiest way to do this in Ubuntu is to install gtkorphan, which my scripts will include in the install list shortly. To do this in Manjaro and any other distribution relying on pacman, simply run the following:

sudo pacman -Rs --noconfirm $(pacman -Qqdt)

That is that!
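One wrinkle worth guarding against: when there are no orphans at all, pacman -Qqdt prints nothing and exits non-zero, which hands pacman -Rs an empty argument list and makes it error out. A minimal sketch of a safer wrapper (run it on an Arch-based system):

```shell
# Collect the orphan list first, then only call pacman -Rs if it is
# non-empty; otherwise report that there was nothing to do.
orphans=$(pacman -Qqdt 2>/dev/null)
if [ -n "$orphans" ]; then
    # word splitting on $orphans is intentional: one package per argument
    sudo pacman -Rs --noconfirm $orphans
else
    echo "No orphaned packages found."
fi
```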


This is the final step on this list: scanning the home directory for broken symlinks and what I call shadow files, the backup copies that editors leave behind with a trailing ~. These files and soft links are almost never needed. To do this, run the following commands:

find $HOME -xtype l -delete

find $HOME -type f -name "*~" -print -exec rm {} \;
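If deleting sight-unseen makes you nervous, here's a small demo in a throwaway directory showing exactly what each find matches, so you can preview with -print before letting -delete loose on your real home folder:

```shell
# Build a disposable directory containing one of each offender.
d=$(mktemp -d)
touch "$d/notes.txt~"             # an editor backup ("shadow") file
ln -s "$d/missing" "$d/deadlink"  # a symlink whose target doesn't exist

find "$d" -xtype l -print               # lists the broken symlink only
find "$d" -type f -name '*~' -print -delete  # lists, then removes, notes.txt~
```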

Stay tuned for part 2 of this cleanup tutorial, this concludes part 1.

Link to scripts:


A lot of people assume that Linux is a technical operating system, too technical for average users. Since the early 2000s, however, it's safe to say that most modern distributions and desktop environments have improved and become far more user friendly. All that said, Linux users should still know a bit about the terminal and how much faster it can make certain day-to-day tasks. In this article, I'll cover a small fraction of the useful commands a user might want every day: gaining system information, finding out where in the file system they are, updating the system, managing services, doing simple cleaning of cached resources, and so on. Linux is not your regular Windows operating system, but with a basic knowledge, most users can find a way to make it work for them. Linux is a Unix-like operating system, which means it's more closely related to macOS; unlike Mac, however, there are no hidden backdoors.

When choosing an operating system, most new Linux users opt for Ubuntu, but the majority of the following commands can be used in all distributions. I'm running Manjaro currently; if a command differs on your distribution, you can usually find a close equivalent. Linux separates the root and standard user accounts by default. Debian takes this further: out of the box you maintain a separate root account and have to log into it each time you want to upgrade or install software. That isn't the easiest setup, but it's a good one. Most other distributions instead add the regular user to the wheel group, so you can run administrative commands using your own password. You still have to enter a password to act as root, but you get the added security that random programs can't automatically run as root just because your regular user ran them. Windows has a similar option to Debian's approach: since at least XP, Windows users have been urged by tech-savvy friends to use separate accounts. This was for security.

Linux hides nothing, and many distributions give you a leg up with either GUI or CLI applications to handle the heavy load for you. Linux also has man pages to help when searching for information on a command. The following commands give you some examples and act as a starting point for anyone on any distribution to learn more. I am in no way an expert, but these are rudimentary-level commands, so here we go.



uname -r

Gathers information about the kernel; supports different flags such as -a or -n (see man uname).


pwd

Display the current working directory.


cd

Change directory. Follow with a slash and/or the directory name.


whoami

Shows which user you are.


netstat

Displays network and port information; can be replaced with ss.


ls

List the contents of the directory.


lsblk

List block devices connected to the system.


ifconfig, iwconfig

Similar to Windows' ipconfig; on newer systems these are replaced by ip and iw.


cat

List the contents of a file.


man

Access the man pages.


updatedb

Updates the file index; usually done automatically.


locate

Add a file name or type plus directory to locate a file.


clear

Clears the terminal screen.


which

When a program is passed as an argument, shows the location of that program.
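A short session stringing a few of these basics together, to show how they feel in practice:

```shell
cd /tmp            # change into a directory
pwd                # print the directory you're now in: /tmp
whoami             # print your user name
ls -l | head -n 3  # first few entries of the listing, long format
```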

In addition to these basic commands, there are others that are good to know. The following lists a few important string style commands that use the root account to function. Most of these are either for maintenance or updating the system.



sudo apt-get update && sudo apt-get dist-upgrade -y

Used in conjunction to refresh the package lists and then upgrade the system in Ubuntu. Will not work in Arch (more on this soon). The && tells bash to run the second command only if the first one succeeds.

sudo rm -r

Is often used to clean a directory of all contents including files and folders.

sudo systemctl reboot

On modern systems, this reboots the machine.

sudo systemctl daemon-reload

Updates the status of systemd without rebooting.

sudo ufw status, enable, reload, disable

Controls the working state of the simplified firewall that interfaces with the underlying iptables.

sudo systemctl restart service

Used to control service states on systemd systems.

sudo rsync -a $dir1/ $dir2

Copies the contents of directory 1 into directory 2 (-a recurses and preserves permissions; the trailing slash copies the contents rather than the folder itself).
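The && chaining used in the update command above deserves a quick demonstration, since it's the reason a failed update never triggers a pointless upgrade. Using true and false as stand-ins for the two commands:

```shell
# && runs the right-hand side only when the left side succeeds;
# || provides a fallback when it fails.
true  && echo "update ok, upgrading"
false && echo "never reached" || echo "update failed, upgrade skipped"
```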

As you can see, once you know a few of these commands, Linux is no longer that scary. You can use this as a reference or cheat sheet for as long as you need. These are just a few of the many commands for Linux. Once you have a grasp of these, you could carry on with business without learning too many others. This is just a good place to start. I wager that most users would want to learn more though. Got a favorite command? Let me know!


For those who want a little more learning, here’s a few other commands that might be useful!



free -h

Shows memory usage in human readable format.

df -h

Shows disk space usage in human readable format.


uptime

Shows how long the system has been up in days, hours, and minutes. Also shows the current load average.
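These three make a handy one-shot "health check" you could keep as an alias; a sketch (the free guard is there because free is Linux-specific):

```shell
# Quick system overview: memory, root disk, and load in one go.
if command -v free >/dev/null; then free -h; fi
df -h /    # root filesystem usage
uptime     # time up and load average
```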