
First blog post

This is the post excerpt.


Greetings, my fellow Linux users and devotees!

I’ve dabbled in the past with a blog, but never was serious about writing on a regular basis. I’d like to change that, starting now. I’ve begun to think more deeply about a number of things, including Linux and life, and I’d like to share them with the world.

I know, I know: This is yet another “me-too” blog from yet another person who wishes to spew his thoughts onto an uncaring world. But not everyone thinks the same way, and I’m hoping that what I have to say is important to at least one person; if I’ve accomplished that, I’ve done what I set out to do.

Without further ado, let us begin the journey!!


Learning experience

Today, I did something that I, in all my 20+ years of building computers, have never done before: I installed a high-end cooler.

Admittedly, I have never seen the need for such cooling, as I don’t believe in overclocking, which is the market that most aftermarket coolers target. Stock coolers, in my experience, have done the job for me in a most satisfactory manner, albeit with a slight bit of noise.

So what has changed? Although my system’s technology is dated, I find myself needing to hold onto it for a bit longer than the geek in me would like, because I am getting ready to graduate from university and need to think about how I’m going to pay back the government. That means no Ryzen, no matter how much I lust for it. Besides, Ryzen compatibility with Linux is not quite there yet, and I don’t have the money for what would essentially be a whole system upgrade: CPU, motherboard, and memory. Ryzen uses the AM4 socket, which in turn requires an AM4 mobo, and since the platform uses DDR4, I cannot reuse my old RAM (alas!).

That means my FX-8370, while still a beast of a processor, at least for what I use it for, needs to be as cool as possible while under load, because I see myself in possession of it for some time, possibly 5 years or more.

The stock AMD cooler, the Wraith, while a competent cooler in itself, does not keep the processor as cool as I would like. Processor temps under load (kernel compilation) were approaching 60 degrees Celsius. That is a bit too high for my comfort. My system fans were recycled from my old case, and when I replaced those with proper 140mm fans, the temps went down 2 degrees, which is a distinct improvement, but still way too high.

So, researching I went. I had heard of the German manufacturer be quiet!, but I didn’t think much of it at the time, having no need of any of their products. That has since changed.

I perused YouTube videos and came across some reviews of the Dark Rock 3 cooler, which intrigued me enough to search out some websites to check temps; it looked like temps dropped 12-20 degrees C compared to the stock cooler. That was impressive enough for me to invest in the cooler. As it happened, Newegg was running a sale on it; $15 off was enough of an incentive for me to purchase one.

Why would I skip the Cooler Master Hyper 212 EVO, which, judging by the reviews, is darn near perfect as such things go, and much cheaper to boot?

Firstly, I think the cooler looks much better, and secondly, it is rated for a long service life, which gives me assurance that it will give a good return on investment.

This is a MASSIVE cooler! It’s a good thing I opted for a full-tower case (Fractal Design Define XL R2); otherwise, I wouldn’t have had enough clearance for my side door.

The learning experience

be quiet!’s documentation leaves much to be desired. It is quite apparent that the majority of people who purchase this cooler do so for an Intel processor, and while mounting hardware is included for AMD sockets, the included instructions only cover Intel mounting, and that poorly. What I should have done (20/20 hindsight, I know!) was to download the AMD documentation from be quiet!’s website beforehand; instead, I did it on my phone after my system was already disassembled.

Turns out that a lot of the included hardware was not needed for AMD’s AM3 socket; all I needed was the backplate, the four mounting screws, and the clamps. After applying thermal paste in two rice-sized lines on the CPU, I had to orient the motherboard vertically, settle the mounting bracket onto the four screws, and hold the cooler in place with one hand; it took me at least three tries before I got it right (live and learn, I guess!).

Screwing the cooler in was an experience in itself. You can’t just fasten one screw completely before moving on to the next; you have to tighten each one a little at a time. After about three rounds of fastening, the heatsink was seated securely on the CPU.

The acid test

I always feel nervous about disassembling my system in this manner, because there is always the off chance that you might miss a connection when reassembling the components. Fortunately, that was not the case. The only hiccup was that I had inadvertently disconnected the power connector from my primary SSD, which I discovered after turning on my system (a miracle in itself) and finding it booting Windows from my secondary SSD. This, of course, was after I had bolted on the side door. Always a lovely time to discover this.

After reconnecting the power and adjusting the BIOS options to boot from the primary SSD, I was ultimately successful in getting the system to boot. Now, on to the real acid test: would the temps be significantly cooler under load? Only one way to find out; a kernel compilation was in order.

Since there was no new kernel release, I went into the 4.12.4 directory (no sense in messing with the currently running kernel’s source directory!) and issued a make clean to ensure a fresh compile (this clears all previously compiled files, to my understanding).

With much trepidation, I issued the command: make -j 9 bzImage. This would stress-test my heatsink sufficiently. To my great delight, the top temp was no more than 45 degrees, 13 degrees cooler than with the old cooler. Color me pleased!
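
For anyone wanting to reproduce this kind of stress test, here is a rough sketch of the steps described above. The source directory path is just an example (substitute wherever your kernel source actually lives), and watching the temperatures with the sensors utility assumes the lm_sensors package is installed and configured:

cd /usr/src/linux-4.12.4      # hypothetical path to the kernel source tree
make clean                    # remove previously compiled files for a fresh build
make -j 9 bzImage             # compile the kernel image with 9 parallel jobs

watch -n 2 sensors            # in a second terminal: refresh CPU temps every 2 seconds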

So what have I learned?

Just because you have 20 years of experience does not mean there are no new frontiers to explore, and this was a definite dip into unknown waters. It didn’t turn out too bad; it could’ve been a lot worse (and I’m thankful it wasn’t).


Keyboarding and the Command Line

My first experience with computers was with the TRS-80 Model I, when I was in junior high, in 1984. Back in those days, when GUIs were an unknown beast, familiarity with the keyboard was a must, and knowing the commands to make your computer do what you wanted it to do was considered elitist. I took a typing class to increase my proficiency; alas, I am nowhere near 100+ wpm. However, at the level I’m at (~45 wpm), I’m able to do most of the things I want to do without too much trouble, like blogging and typing at the command line.

When I took a business class in high school, I first came across the One True Keyboard, the Model M. For those not familiar with computer history, IBM introduced this keyboard in 1985, and it came standard with PS/2 computers. While the PS/2 line became extinct shortly thereafter, its keyboard became the progenitor of modern keyboards, and rightly so. The layout was meant to help typists familiar with the IBM Selectric line of typewriters transition to computers without a steep learning curve.

While I am a proud owner of an original Model M, I realize that modern keyboards have their place, too. One such genre is the mechanical keyboard. After the 90s spate of rubber dome keyboards, which are mushy pieces of junk, a mechanical keyboard has a satisfying feel. Being a big fan of the clicky sound of the Model M’s buckling spring switches, I feel strongly about having tactile feedback when pressing a key. For me at least, my typing is greatly improved by hearing and feeling when a key has made contact, and seeing the character I typed appear on the screen at the same time.

To that end, I recently purchased a Corsair K70 LUX Gaming Keyboard, and it rocks Cherry MX Blue switches, which have the tactile bump and click that make typing on these keyboards a pleasure.

I’ve been a long-time fan of Coding Horror. I was inspired by a post from the site’s owner, Jeff Atwood, We are typists first, programmers second, where he had this to say about the necessity of learning how to type efficiently:

We are typists first, and programmers second. It’s very difficult for me to take another programmer seriously when I see them using the hunt and peck typing techniques.

…there is nothing more fundamental in programming than the ability to efficiently express yourself through typing. Note that I said “efficiently” not “perfectly”. This is about reasonable competency at a core programming discipline.

I couldn’t have said it better myself. I am aspiring to become a sysadmin, and while one is not required to program applications to do one’s job, one will, at a minimum, be required to write scripts to do certain tasks. Typing proficiency is required to make writing the script less time-consuming.

If you run Linux on your computer(s), and are somewhat a geek, then you know that familiarity with the command line is a must. There is relatively little that can be accomplished with merely a mouse click; specifying exactly what you want your computer to do requires typing on the command line.

I felt compelled to make this post because I’m seeing a complacency and jadedness become prevalent with respect to the ancient art of keyboarding. Using a keyboard will remain relevant for years to come. Using a mouse alone for computing is not enough. People who know how to leverage the command line for their needs know what I mean when I say that Linux makes getting “down and dirty” with the computer possible, and you are able to make your computer do things which are difficult, if not impossible, with Windows.

Long live the keyboard!

If it ain’t broke…

What happened to me today as I was messing around on my system is a perfect example of the wisdom of that old saying. To wit…

I was feeling nostalgic for my Unicomp Model M keyboard, so I thought I’d re-purpose my Corsair Vengeance K65 Gaming Keyboard to my laptop. Keep in mind, however, that I had a perfectly working Bluetooth setup, and that, along with my keyboard and mouse, worked through USB. My Unicomp, however, has a legacy PS/2 connector. Which, in theory, shouldn’t have been a problem. Just substitute the Unicomp for the Corsair. Right? WRONG!! Using a legacy connector turned out to introduce all sorts of lag into my system, whose audio chain looks like this: Audio -> PulseAudio -> Bluetooth.

In order for this setup to work flawlessly, everything has to be on the same bus (USB); legacy connections are a NO-NO. This has to do with the way Linux routes communications from the system peripherals. From what I’ve been able to gather, the PS/2 connector introduces major amounts of lag. I haven’t measured it, but if I start a song in VLC and then press stop, it takes 1-3 seconds for the sound to actually stop. Not an acceptable state of affairs.
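
If I ever do want to put a real number on that lag, PulseAudio can report its own sink latency; a quick way to peek at it (assuming the standard pactl utility that ships with PulseAudio) would look something like this:

pactl list sinks | grep -i latency      # prints the current and configured latency per sink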

I only discovered this by logical reasoning, after messing around with certain system files, files which previously had worked without issue. When I met with failure time and time again, I sat back and thought about it. Then it made perfect sense.

Only after re-swapping the Corsair for the Unicomp did my system return to its previously perfect working order. This was a humiliating lesson for me, one which I, in retrospect, should have seen coming, but, as frequently happens with us humans, it didn’t become painfully clear until after the lesson had been taught.

Moral of the story: don’t mess with your system, even if you think you have good reason to do so, until you’ve first assessed why it works the way it does, and then make the adjustment only if you’re sure it will not bork your system’s performance.

It works for me!

Originally, this post was going to be about the non-viability of Windows as an OS geared towards the people, having been created instead to serve the interests of a mega-corporation. But I think a constructive post is called for, one that shows why people run the OS they do.

I’ve begun to realize recently that people get into religious-type wars over the silliest things, like vehicles, and sizes of bodily appendages, and what-have-you. The area I’d like to concentrate on is the operating system that people run, and why they choose to do so.

Operating systems are, by their very nature, facilitators. As non-intuitive as it sounds, OSs were not intended to help users; they were designed to help programmers cut down on the amount of work they would have to do to write an application.

As Neal Stephenson puts it:

Operating systems are not strictly necessary. There is no reason why a sufficiently dedicated coder could not start from nothing with every project and write fresh code to handle such basic, low-level operations as controlling the read/write heads on the disk drives and lighting up pixels on the screen. The very first computers had to be programmed in this way. But since nearly every program needs to carry out those same basic operations, this approach would lead to vast duplication of effort.

Nothing is more disagreeable to the hacker than duplication of effort. The first and most important mental habit that people develop when they learn how to write computer programs is to generalize, generalize, generalize. To make their code as modular and flexible as possible, breaking large problems down into small subroutines that can be used over and over again in different contexts. Consequently, the development of operating systems, despite being technically unnecessary, was inevitable. Because at its heart, an operating system is nothing more than a library containing the most commonly used code, written once (and hopefully written well) and then made available to every coder who needs it.

So when people argue for their favorite OS, whether that be MacOS, Windows, any distro of Linux, *BSD, or whatever other lesser-known OS runs on the x86 platform, they’re actually arguing over their favorite flavor of facilitator. Much like discussions of different flavors of ice cream, such arguments are ultimately pointless in the grand scheme of things. Who cares if your favorite OS is this or that? To some, however, it is vitally important that others agree with their choice; it is miserable to realize that you are the only one (or so you think) who has made the choice of OS you have. And so you spew vociferously vituperous vitriol towards anybody who dares to disagree with your choice.

Let this be a word to the wise: if your OS runs the applications you want, be happy, and let everyone else be happy with their choice. Instead of criticizing, help others with their issues if they happen to fall into your area of expertise. If their choice differs from yours, let it be (as Paul McCartney would have it).

Then everybody will be just a wee step closer to being at peace with each other. Instead of being divisive over our differences, let us unite with our similarities, because we have much more of the latter than the former.

Adventure, Redux

I run an RSS reader (QuiteRSS, for the curious) and have Eric S. Raymond’s blog as one of my feeds. Today’s update proved intellectually interesting, which may seem odd considering the subject is a program designed for hours of wasted time, i.e., having lots of fun. More specifically, it has to do with a text-based game written in the 70s, famous for having spawned a whole gaming genre as well as for still being eminently playable; not a bad legacy, given that most games have a playability lifetime of maybe 5 years.

In his blog post, ESR talks about the porting of the code, and how Will Crowther (the original programmer) and Don Woods (who later enhanced the game) had to work with the primitive programming tools, and hardware, available to them at the time. Needless to say, they pulled it off admirably. However, I want to talk about the cultural impact this game has had, and Crowther’s personal motivation for writing it in the first place.

I’ve heard much mention of this game, but never has anyone mentioned the story behind the game. Being a curious sort of fellow, I Googled Crowther. The Wikipedia article that came up was most illuminating.

It seems that Mr. Crowther was going through some difficult times, being in the middle of a divorce from his first wife. An avid caver (a hobby he shared with her) as well as a fan of the Dungeons and Dragons role-playing game, he decided to write a game so that his daughters, whom he missed terribly, would have something to play to pass the time. Needless to say, the game was a hit with more than just his daughters.

Being a talented programmer, he poured all his formidable skills into making a game that would encompass his adventures in caving. Not surprisingly, given the realistic aspect of the game, other people were drawn into it as well, and it inspired a number of other games based on Adventure, most notably Zork.

Not very many things endure for a long time; it’s worth noting that Adventure is not much younger than another entity known for its extraordinarily long lifetime, one which, in its current incarnations, doesn’t seem likely to become extinct anytime soon. I am, of course, referring to Unix. IMO, what these two have in common, and what contributed to their longevity, is their ability to appeal to a wide variety of people; in other words, they met people where they were, and people saw value in them and invested time, talent, energy, enthusiasm, and yes, even some money, into improving them (or just customizing them to their tastes).

That’s why moddable games are so popular; smart game designers realize that for all the talent they possess, they might not have the breadth of imagination that their fan base has, some of whom are artists and programmers themselves. Games that can be customized and personalized will endure for a much longer period of time than those that are sealed, that is, games that remain solely the vision of their designers. Such games will have a limited appeal and then be forgotten. Such a waste, really, to invest so much effort into a game, only for it to be forgotten after such a short time.

It’s worth noting other OSs and games that fall into roughly the same category.

On the operating system side, not many OSs have had a long lifetime, certainly not as long as Unix’s. Notable examples are AmigaOS and BeOS, whose innovative features and ability to appeal to so many people are a factor in their enduring popularity; efforts are still being made to keep these old OSs relevant.

OS/2 is a curious beast. Born out of a collaboration between IBM and Microsoft in the 80s, it never caught on with the masses because it was difficult to install and use. However, it has never quite died out either; other companies continue to update incarnations of OS/2 to keep this super-stable OS alive.

On the games side, all of id Software’s games, the ones written by one John Carmack, are meant to be modded. Only time will tell, but at present only Quake I seems to have retained its popularity. The other games are moddable, but for one reason or another don’t have widespread appeal.

Let this be a lesson to all of us. When creating something valuable, make sure it appeals to a long-lasting virtue, and not merely to temporal events; efforts tied to the latter are doomed, and any venture worth someone’s talents deserves recognition at least, if not a bit of popularity.

Auto-connecting Bluetooth via PulseAudio

NOTE: I think I’ve found a way to make sure this blog has regular postings: document the solutions I’ve discovered, so as not to forget them next time around.

Without further ado, here goes nothing:

I recently bought a Bluetooth stereo receiver, a Denon AVR-S510BT. As it’s intended to be an entry-level receiver, there’s not much in the way of included features, like support for popular streaming services such as Pandora or Spotify, or even hardware connections like Ethernet; it only has Bluetooth connectivity, which for my purposes is perfect. I spend most of my time in my apartment, using a part of it for my home office, and the distances involved are well within Bluetooth range (~30 ft). Testing has borne this out; connections are strong, with nary a dropout. For someone who really enjoys his music, there’s nothing more frustrating than a dropout once I get into a song, or worse, a series of them.

My receiver is programmed to auto-connect to a Bluetooth device upon startup, as the mode specified is Bluetooth. I have it automatically paired (trusted & authorized) to my Slackware computer. For playing my music, I use VLC, a handy, versatile player that has never failed me yet.
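
For the curious, the pairing and trusting is a one-time affair; a rough sketch of doing it from the command line with bluetoothctl looks like the following (the address below is made up; substitute the one your receiver reports while scanning):

bluetoothctl                      # opens an interactive prompt
scan on                           # discover the receiver and note its address
pair 00:1A:7D:DA:71:13            # pair with the (hypothetical) address
trust 00:1A:7D:DA:71:13           # mark it trusted so it can reconnect on its own
connect 00:1A:7D:DA:71:13         # connect right now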

The problem was that for my receiver to successfully connect, I had to manually right-click the Bluetooth icon in the system tray to pull up the app, then hit the ‘Connect’ button, and then the receiver would proceed to connect. Which is all well and good, but this is something I would much rather have the computer accomplish automatically instead of doing it myself. There’s a good reason for the saying that goes, ‘Google is your best friend’.

Because it is.

One search, and I was able to find a forum site which gave me the solution. One that was rather simple: add a single line to a file.

I added the line:

load-module module-switch-on-connect

to the file /etc/pulse/default.pa.
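
Incidentally, a full restart shouldn’t be strictly necessary for the change to take effect; assuming the standard pactl utility is available (it ships with PulseAudio), the same module can be loaded into the already-running daemon:

pactl load-module module-switch-on-connect      # takes effect immediately, for this session only

The line in default.pa is what makes the change stick across restarts; I rebooted anyway, as described below.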

Now for the acid test.

I saved the changes to the file, and shut off my receiver. Next, I rebooted my computer. Once my computer came back up, I turned my receiver back on. Like clockwork, it automatically connected without the slightest intervention from yours truly.

Sometimes, it pays to be persistent; one never knows what solutions lurk out there in the dark byways of the Information Superhighway unless one looks and doesn’t give up easily.