Thursday 24 January 2013

Installing Ubuntu 12.04 LTS on a HP 6560b notebook

HP notebooks seem to be hostile to Linux for some reason. From what I can gather, some of the HP utilities write data to track 0 of the boot HDD. As far as Windows is concerned, as long as sector 0 is spared it doesn't care what gets written there. Linux, however, uses GRUB (the GRand Unified Bootloader), which places its stage 1 loader in sector 0 and embeds the rest of its early boot code in the remaining sectors of track 0 - exactly where those utilities write. Applications (other than boot loaders) aren't supposed to write to track 0, but that is the stuff of another article.
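
You can inspect sector 0 yourself to see what is at stake. A minimal sketch, assuming the boot disk is /dev/sda (reading the raw device requires root):

# Minimal sketch: read sector 0 of the boot disk and check for the
# 0x55AA boot signature that the stage 1 loader depends on.
# Assumes the boot disk is /dev/sda; reading it requires root.
import sys

def check_mbr(device="/dev/sda"):
    with open(device, "rb") as disk:
        sector0 = disk.read(512)    # sector 0 is the first 512 bytes
    if sector0[510:512] == b"\x55\xaa":
        print("%s: boot signature present" % device)
    else:
        print("%s: boot signature missing - the bootloader may be damaged" % device)

if __name__ == "__main__":
    check_mbr(sys.argv[1] if len(sys.argv) > 1 else "/dev/sda")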

The challenge then is to make the HP notebook dual bootable; make sure as many devices function under Linux as possible and (preferably) virtualise the Windows partition within Linux so that it doesn't become an either/or choice.

Partitions

The first hurdle is the extra partitions that HP creates: the HP_Recovery partition and the HP_Tools partition. HP also has a bootable "boot" partition, making a total of four primary partitions! An MBR disk can only hold four primary partitions, so even after blowing one away, the fourth would have to become an extended partition to leave any room for Linux. So, one or both of these partitions have to go. Copy the files on the HP_Tools partition to C:\HP_Tools and blow it away. You gain an immediate 5GB of space there.

Choosing to lose the HP_Recovery partition is a little more difficult, however the gains are worth it: you get back 15.3GB of disk space and you regain continuity of the file system.

My preferred partitioning setup for a dual boot system is:

P1: NTFS (Windows)
P2: ext4 (Linux boot partition - /boot)
P3: Extended Partition
P4: Unused
EP5: FAT32
EP6: Linux swap partition
EP7: Linux LVM2 partition

The LVM partition is then allocated to the following mount points:

/ - unlimited
/home - unlimited
/var - limited
/tmp - limited
/sys - limited

This is fairly convoluted, but it fits my style of thinking. If you want to create a single root partition for everything then go for it.
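
If you do go down the LVM road, the allocation is easy to script. Here's a rough sketch of the commands involved, wrapped in Python; the partition (/dev/sda7), the volume group name and the sizes are all illustrative only, and no volume is created for /sys since sysfs is a virtual filesystem provided by the kernel:

# Rough sketch of allocating the LVM partition to the mount points above.
# The partition, volume group name and sizes are illustrative only.
import subprocess

def run(cmd):
    print("+ " + " ".join(cmd))
    subprocess.check_call(cmd)

run(["pvcreate", "/dev/sda7"])                       # initialise the physical volume
run(["vgcreate", "vg0", "/dev/sda7"])                # create the volume group
run(["lvcreate", "-L", "4G", "-n", "var", "vg0"])    # limited: /var
run(["lvcreate", "-L", "2G", "-n", "tmp", "vg0"])    # limited: /tmp
run(["lvcreate", "-L", "20G", "-n", "root", "vg0"])  # / (grow later if needed)
run(["lvcreate", "-L", "40G", "-n", "home", "vg0"])  # /home (grow later if needed)

With LVM, "unlimited" in practice means leaving free extents in the volume group and growing a volume with lvextend (plus resize2fs) when the need arises.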

In practice, I have set up the partitions as follows:

P1: NTFS (Boot)
P2: NTFS (Windows)
P3: Extended Partition
P4: ext4 (/)
EP5: Linux Swap

Not ideal and no LVM, however it does put the swap partition at the end of the disk and still allows me to dual-boot. I decided not to use LVM because Ubuntu's desktop installer does not offer it as a native option (unlike CentOS and SuSE), and this is a notebook and not a server - I should be able to manage a contiguous file system on it. If I need more space I can always blow away the two NTFS partitions and use that space for /data.

Ubuntu Setup

After partitioning, the setup continues. I run an update and begin installing the additional packages from the software centre, which is a breeze. I find it amazing that Linux has gone from making it difficult to install software to making it this easy. Connecting to the Ubuntu One cloud, all the files from my previous notebook are restored to this one. I also set up my login for Dropbox and synchronise with Conduit. Setup time is quite quick.



I struggle with the Unity interface for a while before switching to Gnome with the Gnome Panel instead of the Gnome Shell. This makes it easy for me to enable Compiz for a (real) 3D desktop. My real gripe with 12.04 (and Gnome 3.x and Unity) is that so many things that used to "just work" are now broken. Some of these could probably be fixed easily if the packages were properly maintained and ported to GTK3. I can see why Canonical decided to pursue the Unity interface - it makes a lot of sense in light of the insane direction the Gnome project is taking. Other distros have tried to keep Gnome and provide their own customisations: Linux Mint replaces the shell with the Cinnamon interface.

One of the packages I struggled with is nanny. GTK3 really breaks this app. It is listed in the 12.04 software repository, however it fails to appear on the Unity dock. This was one of my reasons for moving to Gnome, however even that didn't fix things fully. There is a privately listed PPA that claims to "fix" nanny; although it allows the nanny-admin-console to run, you cannot make any changes. Ubuntu needs to remove nanny from the software centre.

The other struggle I had was with VirtualBox. After installing it I realised I hadn't enabled VT-x in the BIOS. However, even after making the change I couldn't run a 64-bit virtualised OS. I installed VMware Workstation and had no problem with it. Since VirtualBox hooks into the kernel with its own modules, I uninstalled VirtualBox and re-installed it. This time I had no problem with virtualised 64-bit guests.
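
For what it's worth, a quick look at /proc/cpuinfo would have flagged the problem before the first install attempt. A trivial check:

# Check for the hardware virtualisation CPU flags needed for 64-bit guests:
# vmx (Intel) or svm (AMD). Note the flag can be present while VT-x is still
# disabled in the BIOS, so this is a necessary but not sufficient check.
flags = set()
with open("/proc/cpuinfo") as cpuinfo:
    for line in cpuinfo:
        if line.startswith("flags"):
            flags.update(line.split(":", 1)[1].split())

if "vmx" in flags or "svm" in flags:
    print("CPU supports hardware virtualisation - check it is enabled in the BIOS")
else:
    print("No vmx/svm flag - 64-bit guests will not run")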

Virtualising Windows 7

I tried a variety of methods of P2V'ing the Windows 7 partitions. Most of the guides I found were based on Windows XP, and I also suspect that installing Linux first might have compromised my efforts. Success was achieved by installing the latest version of VMware Converter on Windows 7 and running it in standalone mode, creating a vmdk for VMware Workstation 8 on an external HDD. I was then able to create a VirtualBox machine that runs the P2V'd workstation. Here's the full procedure:

1) Boot Windows 7. Download VMware Converter and install it on the Windows 7 machine to be converted.

2) Run the converter and create the vmdk on an external HDD. In my case 64GB was required and it took several hours. Make sure that as part of the conversion process you disable all hardware services - particularly the HP services. Also change the controller emulation to LSI SCSI. Note that if the external drive is FAT32, it will divide the vmdk into chunks.

3) Boot to Linux. Create a new guest OS in VirtualBox and connect it to the vmdk on the external drive, changing the controller from SATA to SCSI. Edit the settings to give it at least 1024MB of RAM. Enable PAE/NX, VT-x and IO APIC. Change the display settings to 128MB of VRAM and enable 3D and 2D acceleration. Change the network adapter from NAT to bridged. (These settings are scripted in the sketch after these steps.)

4) Start the guest OS, allow it to install all the drivers and then reboot. Install the Guest Additions and reboot again.
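
For the record, step 3 can also be done from the command line with VBoxManage instead of the GUI. A hedged sketch; the VM name, vmdk path and bridge interface are assumptions to adjust to your setup:

# Hedged sketch of step 3 using VBoxManage instead of the GUI. The VM
# name, vmdk path and bridge interface are assumptions - adjust to suit.
import subprocess

VM = "Win7-p2v"
VMDK = "/media/external/win7.vmdk"    # hypothetical path to the converted disk

def vbox(*args):
    subprocess.check_call(["VBoxManage"] + list(args))

vbox("createvm", "--name", VM, "--ostype", "Windows7_64", "--register")
vbox("modifyvm", VM, "--memory", "1024",                   # at least 1024MB RAM
     "--pae", "on", "--hwvirtex", "on", "--ioapic", "on",
     "--vram", "128", "--accelerate3d", "on", "--accelerate2dvideo", "on",
     "--nic1", "bridged", "--bridgeadapter1", "eth0")
vbox("storagectl", VM, "--name", "SCSI", "--add", "scsi")  # match the LSI SCSI emulation
vbox("storageattach", VM, "--storagectl", "SCSI",
     "--port", "0", "--device", "0", "--type", "hdd", "--medium", VMDK)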

I've tested the vmdk running in both VMware Workstation and VirtualBox - both work fine. If you don't plan on using VirtualBox you can create the vmdk for Workstation version 9 instead. I turn off all of the unnecessary stuff in Win7 to leave a vanilla shell running in 800x600 mode. Since I plan on virtualising the apps, the desktop is unnecessary.


The choice between VMware Workstation and VirtualBox is a difficult one. VirtualBox is free, but with VMware Workstation you can virtualise the applications on the Linux desktop as though they were just another app. The advantages of this are too great to simply ignore. I've also found in testing the two (on the same VM) that VMware Workstation is much less memory hungry, only taking the RAM it currently needs. That didn't seem to be the case with VirtualBox when I traced the memory use of both.

Crossover

The last thing to install is crossover. This is the only commercial application I own. It is simply invaluable if you want to run a Windows app on a Linux desktop without emulation. I use this mainly to run Visio - an app for which there is no real competitor.

Conclusion

I now have my workstation working pretty much how I'd like it to be. I can now do everything with it that I used to, plus I have access to Windows 7 whenever I need it. The next few weeks should bed down the installation.

Tuesday 15 January 2013

How I spent my day (Old Blog)

This article is one I wrote nearly ten years ago for my old blog. It was originally written in three parts and explains the origin of my adage: "I'd rather work on a ten minute job than a five minute one. A ten minute job only takes ten minutes to complete. A five minute job takes at least two hours."

It's interesting to note how terminology and technology have changed in ten years. Flash drives were commonly referred to as "pen drives" and dial-up modems are almost unheard of now. Most of the issues discussed with installing Linux are now non-existent - back then you really had to know what you were doing to work with Linux; now Linux is so easy even an MCSE can work with it.

Tuesday 8 January 2013

A lazy sysadmin is a good sysadmin

As the sysadmin, it is your job to keep the IT systems running smoothly. If everything is running, that's no more than everyone already expects. If it isn't, nobody is interested in your petty excuses. Unfortunately, that's the reality of the situation. That being the case, it is in your best interest to keep everything operational with as little downtime or interruption as possible. There's a mixture of human expectation, perception and reality all mixed up here, but essentially it means that in order to be good at your job it helps to be lazy.

Characteristics of a lazy sysadmin

Backups

Lazy sysadmins will be anally retentive when it comes to backups. They will ensure that backups are not only run, but tested to ensure they actually work. Backups will be stored offsite and rotated regularly. Initial backups will be to disk and then flushed to tape. Backup agents will be purchased for every system possible to make granular restoration easier. Complete system backups will also be kept and refreshed every 3 to 6 months so that entire systems can be restored in minimal time. Trial runs will be conducted to familiarise the sysadmin with the process of disaster recovery.
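
"Tested to ensure they actually work" is the step most often skipped, and it is scriptable. A minimal sketch that spot-checks a restore by comparing checksums between the live tree and a restored copy (both paths are hypothetical):

# Spot-check a restore: compare file checksums between the live tree
# and a restored copy. Both paths are hypothetical.
import hashlib, os

def checksums(root):
    sums = {}
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                sums[os.path.relpath(path, root)] = hashlib.md5(f.read()).hexdigest()
    return sums

live = checksums("/srv/data")
restored = checksums("/mnt/restore/data")
missing = set(live) - set(restored)
changed = [f for f in set(live) & set(restored) if live[f] != restored[f]]
print("missing: %d  mismatched: %d" % (len(missing), len(changed)))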

Virtualisation

Lazy sysadmins will also virtualise every system they possibly can. Virtualised servers make life easier by streamlining tasks and removing the hardware dependency on servers. Snapshots will be made to enable easy rollback from upgrades and service pack applications (if required). Lazy sysadmins will also have snapshots stored on redundant hardware for DR purposes.

Clustering / High Availability

All mission-critical server applications will be clustered with failover/failback capability. This will allow the sysadmin to sleep at night if a single server happens to fail. Lazy sysadmins recognise that a 3 (or 5) server cluster is the ideal approach as it allows for redundancy even if one server is down for maintenance.

UPS / Generator / Airconditioning

Lazy sysadmins will insist that all IT systems are protected by good quality, server-grade UPSes that are either online (continually on) or line-interactive. The UPS will be managed, have remote sensors, produce regular environmental reports and issue alerts. They will configure their servers to shut down gracefully on power failure or in unfavourable environmental conditions. They will push for backup generators for the UPS and for the airconditioning, pointing out that the UPS may run the equipment - but not the airconditioners. They will also push for computer room quality airconditioners - preferably redundant - and not settle for domestic grade split systems.

Hardware

Lazy sysadmins will ensure the IT equipment purchased is tier 1 quality (HP, Cisco, IBM, Dell, etc.) with capability for expansion and at least 60% headroom over current requirements. They will not settle for tier 2 or white box equipment.

Remote Access

Lazy sysadmins will ensure that as many tasks as possible can be conducted from home or on the road and where possible by phone or tablet. The time to connect should be as low as possible.

Monitoring

Lazy sysadmins will set up detailed, granular monitoring of all the equipment, servers and services in a hierarchical fashion. A dashboard will be available for an overview, with external monitoring and alerts sent by email or SMS depending upon severity. The lazy sysadmin will regularly check the log files of their systems, looking for inconsistencies that may lead to larger problems at a later date.
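
The log file sweep is also easy to automate as a first pass before reading anything by hand. A crude sketch (the log path varies by system):

# Crude first-pass log sweep: count occurrences of worrying patterns.
# The log path varies by system.
import re

pattern = re.compile(r"(error|fail|denied|timeout)", re.IGNORECASE)
hits = {}
with open("/var/log/syslog") as log:
    for line in log:
        match = pattern.search(line)
        if match:
            key = match.group(1).lower()
            hits[key] = hits.get(key, 0) + 1
for key in sorted(hits):
    print("%-8s %d" % (key, hits[key]))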

Self-Healing Systems

Lazy sysadmins will make sure all essential services are self-restartable. Scripts will be written to monitor services and record the system configuration before and after a service restart. Ideally this will simply be an extension of the capabilities of the monitoring system.
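
As a sketch of the idea, the following checks a single service, records its state, restarts it, and records the state again. The service name and the sysvinit-style 'service' command are assumptions:

# Sketch of a self-restarting service: if it is down, record state,
# restart it, then record state again. The service name and the
# sysvinit-style 'service' command are assumptions.
import subprocess, time

SERVICE = "apache2"    # hypothetical service

def running():
    return subprocess.call(["service", SERVICE, "status"]) == 0

def snapshot(tag):
    with open("/var/log/%s-heal.log" % SERVICE, "a") as log:
        log.write("%s %s running=%s\n" % (time.ctime(), tag, running()))

if not running():
    snapshot("before")
    subprocess.call(["service", SERVICE, "restart"])
    snapshot("after")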

Security

Lazy sysadmins will never compromise on system security. They will establish secure firewalls, secure vpn, vlans, dmz access, email scanning, forward and reverse proxies, virus protection, enforced password security and apply multi-factor authentication where possible.

Patches and Updates

Lazy sysadmins will apply patches and software updates on a regular basis. Patches improve the stability and security of a system; updates extend functionality and reduce the time needed when external support is required.

Documentation

Lazy sysadmins recognise they have a poor memory, so they make sure that all new systems are built three times: once to familiarise, once to document and the last time to test the build documentation. That way if and when it comes time to rebuild that system, they know the documentation is accurate. Lazy sysadmins also write their system documentation aspirationally: that is, the system is documented how they would like it to be rather than as a snapshot of its current condition. That way, over time the documentation becomes more accurate rather than less accurate.

Training

Lazy sysadmins recognise that the more people that know what they do, the less likely they will be called out after hours. They will train their juniors to know as much as they do and encourage them to learn more independently. They will encourage juniors to become mini-experts in the smaller systems and document their systems accordingly.

So, if you are a sysadmin, make sure you are a good one by being as lazy as possible and following the tips listed above.

Monday 7 January 2013

Email status check

Okay, you're offsite and someone rings up to say the email system isn't working. Now, you KNOW that nine times out of ten the email system is working perfectly - it's just something the user is doing wrong. How can you quickly check that email is working without logging into the servers? Well, you could simply send an email from your gmail account to your work account and vice versa. Getting both emails is a good indication that everything is working, but if you don't get them, it tells you absolutely nothing other than that something might be wrong with the email system.

Most enterprise mail systems have a number of servers involved in the generation, transmission and reception of email. In generic terms we have the following elements:

MTA - Mail Transfer Agent
MDA - Mail Delivery Agent
MSA - Mail Submission Agent
MRA - Mail Retrieval Agent
MFA - Mail Filtering Agent
MUA - Mail User Agent

Many sysadmins may exclaim at this point "Hang on - I don't remember there being that many elements to email delivery!" and the reason is that we are now in the post-MARID world of Internet mail, as described by RFC5598 (2009). Quite simply: things are different now. If you are running a mail system that was set up before this time and not updated, chances are that you aren't compliant with the IETF standard. If you're running MS Exchange out of the box, then you definitely aren't standards compliant. However, making your email system RFC-compliant is the stuff of another article...

RFC5598 divides the various agents into their respective areas of responsibility called "Responsible Actor Roles". These are:

 - User
 - Message Handling System (MHS)
 - ADministrative Management Domain (ADMD)

The traditional flow of email was:

MUA -> MTA -> .... -> MTA -> MUA

Now, the email flow is more commonly:

MUA -> MSA -> MTA -> ... -> MTA -> MFA -> MDA --> MRA --> MUA

where -> is a push operation and --> is a pull operation.

Obviously, in such a system there are a number of elements that can go wrong and be described as "the email system is down".

On email systems I administer, I usually create a dummy account called "Email-Check". At its most basic level, you set it up with an Out of Office reply that says "Email is working". However, it doesn't end there: each point in your message reception system can be set up to respond with diagnostics on each component. A fully working system will receive replies from each component in the chain. In the second example above, if you send your email to email-check@your-domain and receive a reply from the MTA and the MFA, but not the MDA or the MRA, then you can reasonably assume the problem lies with the MDA - that's where to start looking.
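
The probe itself can be scripted so the whole check is one command from wherever you are. A hedged sketch, assuming the replies come back to a mailbox you can poll over IMAP; all the host names, addresses and credentials are placeholders:

# Send a probe to the email-check account, then poll for component replies.
# Host names, addresses and credentials below are placeholders.
import imaplib, smtplib, time
from email.mime.text import MIMEText

PROBE_FROM = "sysadmin@your-domain.example"
PROBE_TO = "email-check@your-domain.example"
COMPONENTS = ["MTA", "MFA", "MDA", "MRA"]    # expected in the reply subjects

msg = MIMEText("email-check probe")
msg["Subject"] = "email-check probe"
msg["From"] = PROBE_FROM
msg["To"] = PROBE_TO

smtp = smtplib.SMTP("smtp.your-domain.example")
smtp.sendmail(PROBE_FROM, [PROBE_TO], msg.as_string())
smtp.quit()

time.sleep(120)    # give each component time to respond

imap = imaplib.IMAP4_SSL("imap.your-domain.example")
imap.login("sysadmin", "secret")
imap.select("INBOX")
for component in COMPONENTS:
    typ, data = imap.search(None, '(SUBJECT "%s")' % component)
    status = "replied" if data[0].split() else "NO REPLY - start looking here"
    print("%-4s %s" % (component, status))
imap.logout()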

Practical Examples

MailMarshal

1. Write a rule in MailMarshal that triggers when the To: address is email-check. Have the rule run the file "mail-check.cmd" as an external command and pass it the following parameters: servername@domain {ReplyTo} {SenderIP} {HelloName}

2. Write mail-check.cmd as follows:

@echo off
rem mail-check.cmd - generate and send a MailMarshal status report
rem %1 = from address (servername@domain)  %2 = {ReplyTo}
rem %3 = {SenderIP}                        %4 = {HelloName}
c:
cd \scripts
echo Email check for [servername] > mmcheck.txt
echo. >> mmcheck.txt
rem Crude trick to append the current time to the report
echo.|time|grep current >> mmcheck.txt
echo. >> mmcheck.txt
echo [servername] MailMarshal Service Information >> mmcheck.txt
echo. >> mmcheck.txt
rem Dump service status via msinfo32 and extract the MailMarshal entries
start /wait msinfo32.exe /categories +SWEnvServices /report msinfo.txt
type msinfo.txt | grep MailMarshal >> mmcheck.txt
echo. >> mmcheck.txt
echo Sending IP  : %3 >> mmcheck.txt
echo Helo Name   : %4 >> mmcheck.txt

echo Sending Mail.
rem bmail is a command line SMTP mailer; send the report to the sender
bmail -s 127.0.0.1 -t %2 -f %1 -h -a "MailMarshal Check [servername]" -m mmcheck.txt > sentmail.txt

Of course, you'll need to source the executables for grep.exe and bmail.exe or provide substitutes in order for this to work.

Postfix / Sendmail

If you are running Postfix or Sendmail, this job can be done using a milter. Milters are generally written in C, Python or Perl; personally, I prefer Perl. The way you write your script will depend on your actual setup. I plan on posting a Postfix setup example sometime, and I'll include a milter for email-check at that time.
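
Until then, here is a rough sketch of the general shape such a milter might take, in Python with the pymilter bindings rather than Perl since it is shorter to show; the account name, socket and log path are assumptions:

# Rough shape of an email-check milter using the pymilter bindings.
# The account name, socket and log path are assumptions.
import Milter

class EmailCheck(Milter.Base):
    def envrcpt(self, to, *params):
        # RCPT TO: is visible here, unlike in most header-level tools
        if to.lower().startswith("<email-check@"):
            with open("/var/log/email-check.log", "a") as log:
                log.write("probe received for %s\n" % to)
            # A fuller version would queue a diagnostic reply to the
            # envelope sender at this point.
        return Milter.CONTINUE

if __name__ == "__main__":
    Milter.factory = EmailCheck
    Milter.runmilter("email-check", "inet:9900@127.0.0.1", 240)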

Exchange

Unfortunately, dealing with actual messages in Exchange requires an MUA, and I don't see any way around this except to set one up to act on these messages. Technically, there's nothing to stop you running an Outlook client on an Exchange server with autologon (other than sheer common sense, that is).

Groupwise

Being a full groupware system, there are a number of ways Groupwise can respond and react to email messages at the server level. The easiest way is through the Groupwise API engine (GWAPI), which can respond to the content of messages and trigger external scripts, and is relatively simple to install and configure. The only downside is that ongoing development of the API ceased at version 5 - so it will essentially run as an external system, and only on a NetWare server. The next easiest option is to write a Custom 3rd Party Object (C3PO), however that is essentially an MUA that requires the Groupwise client to be installed. The elegant solution is to create a Trusted Application using the Groupwise Trusted Application API (TAPI) that will directly access the message store.

Lotus Notes/Domino

Any decent Notes system will have at least one programmer managing the Notes/Domino infrastructure. Implementing a script to report on the status of the Domino system should be trivial.

Friday 4 January 2013

OS/2 Obituary

OS/2 version 1 was a dismal failure - that's really all I have to say about that. Version 2.0 had moderate success, mainly due to Citrix Winview (the precursor to WinFrame and MetaFrame), however Warp Server (version 3 through to 4.51) was a spectacular OS.

IBM decided to collaborate with Microsoft in creating OS/2. The original idea was that Windows 3.x would be the desktop OS and OS/2 would be the server OS. As a result there was a fair amount of shared code between the two. At the time, Microsoft didn't have a server/network solution and IBM had LAN Manager. Microsoft also had a deal with Novell that allowed Windows to dovetail into NetWare and use the IPX/SPX protocol. Novell and IBM had their own deal which allowed their products to interoperate as well. It was all really cosy: Microsoft owned the desktop, Novell owned the network, IBM owned the server. Everyone knew whose turf was whose.

Then a weird thing happened. Microsoft released the Windows 3.1 upgrade and sold 30 million copies in the first two months - on the back of 8 million sales of Windows 3.0 over the previous two years! Microsoft crunched the numbers, dumped the deals with Novell and IBM, and decided to write their own server OS and networking protocols.

However the deal with IBM was set in stone. IBM had the rights to nearly all of the Windows APIs and, in turn, Microsoft owned about 30% of Warp Server. The divorce was a bitter one that (intentionally) delayed the release of Warp Server. But release it did. Then another weird thing happened...

On the release date of Warp Server, Windows NT had more press coverage, advertising and editorial space devoted to it than Warp Server did - in fact, nearly twice as much. At the time, NT wasn't even in alpha. It was vapourware! Over the weeks and months that followed, press coverage for Warp Server declined while NT coverage remained constant. Microsoft simply out-marketed Warp Server.

The reality was that Warp Server was much more capable than even Windows NT 4.0 - which wasn't released until years later.

The irony was that Microsoft made more money per copy of OS/2 that IBM sold than it did per copy of Windows NT it sold itself. Essentially, sales of OS/2 Warp Server funded the development of Windows NT.

To buy time, Microsoft released an update to Windows 3.1 called Windows for Workgroups 3.11. This had a very crude networking system called NetBEUI (NetBIOS Extended User Interface). Microsoft simply took NetBIOS (which came to them from the IBM deal) and, instead of attaching it to a routable protocol such as IPX or IP, sent it out as a raw broadcast. It was really horrible, but it worked. As a side issue, Novell engineers suddenly discovered that all the great interoperability between Windows and NetWare had disappeared. Workarounds were established, but things would never be the same.

The other gain Microsoft had from IBM was the HPFS file system that IBM developed. Microsoft made a few small changes and called it NTFS.

The deal between Novell and IBM held solid, and Novell released a version of NetWare that ran as an application on top of Warp Server. This meant that Novell sites (accounting for 87% of networks at the time) could run a single server for both applications and networking. And because of the shared code, Windows apps could run on Warp Server. NetWare for OS/2 ran with only a 5% overhead compared to a bare-metal server.

Quite simply, OS/2 Warp Server was better, faster, cheaper and more capable than Windows NT ever was. At the time Windows NT didn't even exist as a product, yet Microsoft cut deals with large organisations and governments worldwide to install Windows NT instead of OS/2. In nearly every case these decisions were made without reference to the technical people in the organisation. Microsoft had worked out that as long as their people were playing golf with your boss, your opinion as an engineer wasn't going to count for much. IBM relied on the (now waning) adage that nobody ever got fired for buying IBM.

Yet many places DID buy and implement Warp Server, and in some cases it continues to be used. NCR ATMs still use OS/2 Warp Server, as do ticketing machines, baggage handling systems, voice mail systems, telecommunications systems and satellite control systems. Warp Server particularly shines in environments where latency is unacceptable, such as real-time systems. OS/2-trained engineers describe Warp Server as "it just works": it doesn't crash, doesn't need to be restarted on a regular basis, doesn't suffer from bottlenecks or glitches and doesn't need to be restarted for updates. You install it and it runs for the next ten years.

IBM eventually gave up on Warp Server, licensing it to Serenity Systems in 2001, where it was renamed eComStation. The latest version is 2.1GA (General Availability), released in 2011. Sales are low and Serenity Systems allows you to download it for free. It will run virtualised in Oracle VirtualBox.

As a side irony, about ten years ago a company in Russia wanted to run Warp Server virtualised. VMware couldn't do the job at the time, so they hired some programmers and created a new company to write the virtualisation software. They named the company Parallels Inc.

There is a project called OSFree that aims to recreate Warp Server as an open source OS.

Wednesday 2 January 2013

Wildcard email addresses in MailMarshal

A few years back, there was a free service that let you generate unique email addresses that all redirected to a single email account. This was great for web forms that required a valid email address: you would generate an email address for that particular website and disable it if they started spamming you.

Well, like all great "free" services, it eventually became "non-free", so that was the end of that. However, with a little ingenuity, it is possible to get MailMarshal to do something similar. Here's how:

Grammar

Firstly, you need to identify the specific grammar of your email addresses, develop a secondary grammar for the wildcard addresses, and make sure there are no "collisions" between the two. For example, most organisations have email addresses conforming (more or less) to the following grammar:

<first_name>.<last_name>@domain_name

That being the case, you can then define your wildcard grammar as follows:

<first_name>.<last_name>.<wildcard>@domain_name

Create Wildcard Group

The second step is to create a MailMarshal group called "Email Wildcard" and populate it with the address of everyone who will be using a wildcard, plus a .* entry for each user as well. For example:

joe.bloggs@example.com
joe.bloggs.*@example.com

It is possible to dispense with this step, however the group requirement gives you more control.

Rule 1

Some preliminary work is required here:

Firstly, create an external command called "Echo Old address to Threadno File". This is necessary because MailMarshal can only rewrite message header fields, not SMTP envelope fields: you can work with the To: field, but not the RCPT TO: field, which (unfortunately) is where the real delivery happens. So we need to modify the text of the email directly, outside of MailMarshal.

The external command will have the following properties:

Command Line: cmd
Parameters: /c echo {Recipient}>"{Install}\Address{Threadno}.txt"
Tick the "single thread" and "execute once only" boxes. Leave the rest unchanged.

All this command will do is write the actual recipient email address to the file. Left like this, it will do nothing. We need to modify it later.

Next we need to write the rule that looks for the email wildcard. This is done using a Header-Match rule that needs to be manually defined as follows:

Match against: TO: CC: BCC:
Field parsing method: email addresses
Field search expression: (.+)\.(.+)\.(.+)@domain_name

For informational purposes, create a message stamp to indicate the email is from a wildcard source:

-----
Wildcard email address - {Recipient}


Next, create a header rewrite rule called "Address Change", defined as follows:

Match Against: X-AddressChange:
Field Parsing Method: Entire Line
Insert If Missing: Address Change

This will add the X-AddressChange field to the header, indicating the email address has been changed, and set us up for Rule 2. The complete Rule 1 will look as follows:

Standard Rule: Email Wildcard deletion - Rule 1
When a message arrives
Where message is incoming
Where addressed to 'Email Wildcard'
Where message contains one or more headers 'Email Wildcard'
Run the external command 'Echo Old address to Threadno File'
    And write log message(s) with 'Email Wildcard'
    And stamp message with 'Email Wildcard Stamp'
    And rewrite message headers using 'Address Change'
And pass message to the next rule for processing.


Rule 2

This rule is a lot simpler: it looks for the X-AddressChange field and then rewrites the email address to remove the wildcard.

The Header Match rule needs to be defined to look for "X-AddressChange" with the search expression '.+'

The Header Rewrite rule will be as follows:

Matching Fields: To:, Envelope Recipient:
Field Parsing Method: Email addresses
Field Search Expression: ^(.+)\.(.+)\.(.+)@(.+)
Substitute into field using expression: $1\.$2@$4
Enable Field Changes: (ticked) 
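
The search and substitute pair is plain regular expression rewriting, so you can sanity-check it outside MailMarshal before deploying. A quick illustration in Python:

# Sanity check of the wildcard rewrite: the first address is rewritten,
# the second (no wildcard) is left alone.
import re

pattern = r"^(.+)\.(.+)\.(.+)@(.+)"
for addr in ["joe.bloggs.website1@example.com", "joe.bloggs@example.com"]:
    if re.match(pattern, addr):
        print("%s -> %s" % (addr, re.sub(pattern, r"\1.\2@\4", addr)))
    else:
        print("%s -> unchanged (no wildcard)" % addr)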

The final rule will be as follows:


Standard Rule: Email Wildcard deletion - Rule 2
When a message arrives
Where message is incoming
Where message contains one or more headers 'X-AddressChange Exists'
Rewrite message headers using 'Email Wildcard Deletion'
And pass message to the next rule for processing.


Rule 1 will stamp the message so you will know the original address used. If you start receiving spam from a source, add it to your blacklist of recipients. For example, suppose you sign up to a site using joe.bloggs.website1@example.com as your email address and you start being spammed by them. Add the address to your recipient blacklist and the spam will stop, while your regular email will still be delivered.

It is possible to add a "Rule 1.5" to add the recipient to the subject line - that way you can sort your emails by subject line. The rule would be very similar to Rule 2.

This is just one example of how you can push the boundaries of what MailMarshal is capable of by using external commands.