Wednesday 17 July 2013

The day I met Linus Torvalds

There I was, minding my own business at the Google party after the third day of the 2007 Linux Conference Australia, sitting at a table by myself when I hear a voice say "Do you mind if we sit here?"

I look up and see Linus there with his wife Tove and reply "No worries, go right ahead."

What followed was some awkward silence, interspersed by the occasional exchange in Finnish between Linus and Tove. I valiantly attempted a conversation.

"Have you been doing much kernel maintenance lately?" (It sounded bad as soon as I uttered it)

"Oh no, not much. I try to leave that to others."

I tried again: "Will you be giving many lectures while you are here?"

His reply: "Just my introduction to Andrew Tanenbaum's keynote address tomorrow. I really hate public speaking, I only do it when I have to."

I figured I may as well dive into the specific issues I had with Linux at the time.

"So, how do you feel Linux is going to handle enterprise deployment issues?"

Linus seemed genuinely surprised, as though he didn't understand the question - or simply the need for it. I tried to elaborate.

"The problem I have with convincing clients to use Linux is the lack of enterprise tools available - the ability to image workstations and servers, manage desktops, deploy applications and printers etc."

Linus scoffed at these requirements, but I pushed on "These things are the reasons why Linux has over 50% of the web server market, but less than 1% penetration into the corporate desktop."

Linus' answer was that if there was really a demand for these tools, then there would be open source projects for them.

To me, that was cart-before-the-horse stuff and I said so. I also mentioned the lack of a reliable network file sharing protocol (NFS is not up to snuff and neither is Samba), to which he asked "Well, what do you use?"

"NCP - Netware Core Protocol" I replied.

"Netware!? That's a dead operating system. No one uses it anymore."

"That's not true, all of my clients use it. The Queensland Gov't uses it. In fact, the larger the organisation, the more likely they are to use it. It's strength is its scalability - which is far superior to Windows and currently lacking in Linux."

"When was the last time you installed a new Netware server?" He scoffed.

"This month: Two installations in fact. But anyway, NCP and eDirectory run on either Netware or SLES."

At that point he asked if I worked for Novell and when I said no, he still checked my conference badge. Once he confirmed I wasn't one of the Novell apparatchiks he proceeded to rip on Netware and Novell in general. He had a couple of good points that I (and just about every other Netware engineer) agreed with - such as the increasing problem of driver support for closed source operating systems.

However, then it turned kinda personal - almost pityingly personal actually. In the world according to Linus: I was a dinosaur, a relic of a bygone era reminiscent of the last telegraph operator or a septuagenarian steam train driver. 

This wasn't at all what I expected. I'd hoped for a robust discussion; some thoughtful insights and maybe the occasional "Hmmm, interesting point." Instead my treasured discussion points may as well have been heretically nailed to a church door in the presence of Tomas de Torquemada. I think I would have had a fairer hearing.

I excused myself and tried to enjoy the party that Google threw for us geeks before Linus could burn me any further. I found it difficult to enjoy the festivities and left early for my conference domicile, where I pondered the conversation before I went to sleep. This is always a bad idea for me as I tend to have strange dreams - really strange dreams that would delight any Jungian psychotherapist. Tonight was not going to be an exception.

I found myself at the Battle of Zama as Hannibal commanding the Carthaginian Army against the Roman forces under Scipio Africanus. I know this battle well and deeply despaired. Historically, this was Hannibal's last battle and a major victory for Scipio. I looked at the Carthaginian Army and noticed they were all wearing red with the Novell logo. I rode forward to parley with Scipio, who was bearing the banner of a penguin. When Scipio removed his helm - there was Linus' face staring back at me with that same pitying smile he gave me at the party.

"We seek terms for surrender." I said

"There will be no surrender. After today Carthage will be only a memory." He replied without changing his facial expression. Linus replaced his helm and we both reformed the lines.

"Your orders?" asked my trusted aide. I thought for a while before replying.

"Take out your sword and cut off my head."

And then without any hesitation, he complied.

Monday 15 July 2013

How to know if you are a bad sysadmin

Just about every sysadmin I have met has one thing in common: they all think they are awesome at their job. However, rarely (well, IMHO anyway) is this correct. Most will be quite offended if I have something to say to them about their ineptness. So I have created a short self-evaluation questionnaire that you can use to find out just how bad (or good) you are at your job. If you are a "good" sysadmin, you should score very low on this questionnaire and at least recognise the issues that exist. If you don't understand the reasons for these questions: it's time to either take steps to remedy these issues or do the rest of us a favour and leave the industry forever.

1. Do I ever have to ask my users for their passwords?

No sysadmin should ever need to know a user's password. You should have procedures, methodologies and/or technologies in place to ensure this is never necessary.

A corollary to this is that password resets should immediately be followed by a forced password expiration. All end-user passwords should be rotated at regular intervals with duplicates not allowed.

2. Do I ever use the enterprise Administrator password?

The administrator or root password should never be used except on standalone systems. All administrators should have their own administrator password separate from their usual login. An extension to this is that all administrative activity should be logged.

3. Do I physically have to go to a user's workstation?

For anything other than doing physical work, this should be unnecessary. You should actually have more capability through remote access than sitting at their desk.

4. Do I never conduct trial restores from backup?

Just because your backup software says "successful backup" it does not necessarily follow that you will be able to restore data from it. Check regularly so you become familiar with the process. At least once every six months, do a complete trial disaster recovery for one of your servers. Time yourself and try to beat that time.

5. Do I have to manually setup workstations for new users?

A new user should only need to log in with their password to get:

 - All their software
 - Their drive mappings
 - Their printers

There is simply no need for a sysadmin to get involved in this process. It should all be automated.

6. Do I use statically mapped drives?

This should never be required. Scripting should take care of all contingencies.

7. Do I use user-based file system rights/permissions?

These are close to impossible to administer. If you have user-based permissions for anything beyond services and home directories, chances are your file system security is non-existent.

8. Do I allow direct access to the Internet?

This is a serious security issue. Access to the internet for ANY protocol should be via the DMZ using proxied or relayed access. No exceptions.

9. Do I use 'Same as xxx user' when creating new user accounts?

This is not only lazy, it is insecure and leads to non-repeatable actions. The new accounts will usually have far too many permissions - many of which you will be unable to explain the reason for.

10. Am I unable to name any new technology that I have trialled in the last six months?

Good sysadmins spend significant time on system development. This includes trialling new technologies as they are released to determine their relevance in your environment.

Enterprise File System Security

I'd have to say that very few (read 'two') organisations I've encountered implement good file system security. This is a pity because file system security is one of the most basic and easy things to get right. Many sysadmins, however, implement security for their systems on a user basis - which makes administration a function of the number of users you have.

Some time ago I established a reasonably bullet-proof filesystem security template that I have applied across NFS, CIFS/SMB and NCP file systems. The required exceptions are rare and (usually) easily managed. It does require management buy-in, so you need to establish the business case very well (greater security, better administration etc). You will also need to lock down the new-user and modify-user processes tightly, which will require cooperation with HR. Coordination with departmental heads to ensure they have the ability to do their work unhindered by the file system security is essential.

General Notes

Never use masks or revoke permissions. The number of cases where this needs to be done is so rare it is almost non-existent. If permissions need to be revoked, simply apply zero permissions to the appropriate user or group. Trying to track down complex permissions with revocation is a nightmare - so don't do it.

With the exception of home directories, user specific directories, administrative accounts or services: Never, ever apply permissions to users. Create a group - even if that group has only one member - and apply rights/permissions to the group.

For Windows servers, apply all security via NTFS permissions and not via the share permissions. You'll thank me for this one someday...

Also for Windows, be careful of the ACLs. There is no true inheritance in NTFS, so if you copy or move a directory structure, make sure you re-apply the permissions.

Get a handle on how the FS security works for your system. Know it backwards.

Home Directories

This is fairly simple. Only users have rights to their home directory, with one possible exception (see the Submit Directory below). Make sure the users don't have the ability to add others to their directory or sub-directories. For Windows sysadmins, this means giving users 'Modify' instead of 'Full Control' permissions.

Departmental Directories

In general, you will want all members of the department to have R/O access to this tree. A group will need to be created for those that have write access. Beyond this, apply extra permissions at the lowest level only and create groups for those with extra access. Use a naming convention that allows you to know exactly by looking at the group what it is used for.

Shared Directory

This will generally be a single tree with subdirectories for each department. Members of that department will have R/W access and others will have R/O access.

Temp Directory

This is to facilitate the temporary transfer of files. All users have R/W access and the directory has size restrictions on it. The entire structure is deleted every weekend after backup.

Software Directory

A R/O structure containing installable software and other R/O files. This structure is rarely backed up.

Submit Directory (Optional)

This requires some clever scripting. Essentially, there is a subdirectory for every user. Everyone has W/O access to these directories. They can copy files there, but not see or read the contents. Once copied, a script runs which moves the submitted files to the user's Home Directory/submit and an email is sent notifying the user of the file, its location and who copied it there.
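As a rough illustration of that script, here is a minimal sketch for a Linux file server. The paths, mail relay and address format are assumptions for the example; a production version would also need to preserve ownership and cope with whatever NCP or CIFS specifics apply.

#!/usr/bin/env python
# Minimal sketch of a submit-directory sweep. Paths and the SMTP relay are
# assumptions for the example.
import os, pwd, shutil, smtplib
from email.mime.text import MIMEText

SUBMIT_ROOT = "/data/submit"    # one write-only subdirectory per user
HOME_ROOT = "/home"             # destination is <home>/submit
MAIL_DOMAIN = "example.com"

def notify(user, filename, dest, owner):
    # Tell the user what arrived, where it went and who copied it there
    msg = MIMEText("%s placed %s in %s" % (owner, filename, dest))
    msg["Subject"] = "File submitted: %s" % filename
    msg["From"] = "submit-robot@" + MAIL_DOMAIN
    msg["To"] = "%s@%s" % (user, MAIL_DOMAIN)
    server = smtplib.SMTP("localhost")
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()

for user in os.listdir(SUBMIT_ROOT):
    inbox = os.path.join(SUBMIT_ROOT, user)
    dest = os.path.join(HOME_ROOT, user, "submit")
    if not os.path.isdir(inbox):
        continue
    for filename in os.listdir(inbox):
        src = os.path.join(inbox, filename)
        owner = pwd.getpwuid(os.stat(src).st_uid).pw_name
        if not os.path.isdir(dest):
            os.makedirs(dest)
        shutil.move(src, os.path.join(dest, filename))
        notify(user, filename, dest, owner)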

Additional FS Security

For anything not covered, create a group or groups with the necessary permissions. Make sure your naming convention makes these fairly self-explanatory. Ensure that these group memberships are part of the user account creation process, which should NEVER have the statement 'Same as xxx' - this is simply bad administration.

Administration

Once you have settled upon the appropriate rights/permissions and tested them, dump them to a text file and check them carefully. With the exception of home directories, write a script which will re-apply these permissions, deleting any others, and have it run every night. When administering, modify the text file (or create a new one). By doing this, if someone with administrator access fools around and adds permissions (or removes them), all will be put back to rights overnight. As an extension, dump the rights/permissions before modification and compare them using diff with the original. Send an email alert out if there is a difference.
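As an illustration of the idea, here is a minimal sketch for a Linux server using POSIX ACLs (getfacl/setfacl). The tree, baseline file and mail addresses are assumptions for the example; NTFS or NCP file systems would use their own native tools for the same dump/compare/re-apply cycle.

#!/usr/bin/env python
# Minimal sketch: compare the live ACLs against a saved baseline, alert on any
# drift, then re-apply the baseline. Paths and addresses are example values.
import difflib, smtplib, subprocess
from email.mime.text import MIMEText

TREE = "/data/departments"
BASELINE = "/etc/fs-acl/departments.acl"   # created once with: getfacl -R -p /data/departments

current = subprocess.check_output(["getfacl", "-R", "-p", TREE]).decode().splitlines(True)
baseline = open(BASELINE).readlines()

diff = list(difflib.unified_diff(baseline, current, "baseline", "current"))
if diff:
    # Someone has fooled around with the permissions - send the diff and fix it
    msg = MIMEText("".join(diff))
    msg["Subject"] = "ACL drift detected on %s" % TREE
    msg["From"] = "fs-audit@example.com"
    msg["To"] = "sysadmin@example.com"
    server = smtplib.SMTP("localhost")
    server.sendmail(msg["From"], [msg["To"]], msg.as_string())
    server.quit()
    subprocess.check_call(["setfacl", "--restore=" + BASELINE])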

Followed properly, all you should need to do to alter a user's permissions is to change their group membership. The number of times you will actually need to add more permissions will be small, and now you can make them subject to a change request rather than a service request.

Enforcement

Executables should not exist in home directories or departmental directories. It is a simple matter to write a script to quarantine executable files (or make them non-executable) on a daily basis. If executables need to be shared, a change request should be submitted to the IT Department.
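A minimal sketch of that daily sweep for a Linux file server follows. The tree locations and the choice to strip the execute bits (rather than move the files to a quarantine area) are assumptions for the example.

#!/usr/bin/env python
# Minimal sketch: find anything executable under the home and departmental
# trees and make it non-executable. Paths are example values.
import os, stat

TREES = ["/home", "/data/departments"]

for tree in TREES:
    for root, dirs, files in os.walk(tree):
        for name in files:
            path = os.path.join(root, name)
            mode = os.lstat(path).st_mode
            if stat.S_ISREG(mode) and mode & 0o111:
                os.chmod(path, mode & ~0o111)
                print("quarantined: %s" % path)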

This is just a small but crucial part of best practice administration. It is simply better to have a total handle on your file system security.

Monday 20 May 2013

Printing to Sharp multifunction printers with user codes

The brief was fairly simple: require departmental codes when copying from or printing to a number of Sharp MX4111N and MX5111N multifunction printers. Simple? Well, almost...

Part of the brief was also that users shouldn't have to type a code in when printing. The workstation must magically know which department and code to use for each user. From my perspective, it's also handy to be able to print from Linux. This blog covers two parts: the Windows side and the Linux side.

Printing from Windows

The Windows driver for these printers is very nice. Since the site for implementation was a Windows site, a Windows print server was set up - that much was pretty basic. There were spare licences for Windows Server 2003 R2, so that was used. I also set up IIS with printer integration so the printers could be browsed, monitored and added manually from within IE. Again, all very basic. Now the question of the printer codes.

My original thought was to distribute the codes at driver installation time, however that won't work because the codes are stored encrypted within the registry. The only real way to do it is via a registry hack after determining what the hash value really is.

The first step was to determine the hash values. For this, I cheated. I manually entered the code in the driver, saved, then copied the DWORD key and value from the registry into a text editor.
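If you want to skip the copy-and-paste step, a few lines of script can read the saved value straight back out of the registry and print it in .reg format. This is only a sketch run on the workstation where the code was entered; the key path below is the one used on this site and is an assumption for anywhere else.

# Sketch: read the encrypted account number back out of the registry and print
# it as a .reg-style hex string. The key path is specific to this site.
import winreg   # 'import _winreg as winreg' on Python 2

KEY = r"Software\SHARP\smvps01\pmvad-L1\printer_ui\job_control"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY) as key:
    value, vtype = winreg.QueryValueEx(key, "account_number")

print('"account_number"=hex:' + ",".join("%02x" % b for b in bytearray(value)))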

Now, I hate hacks in production, so I hit upon the idea of using an adm template and setting the user codes within group policy. It would be very neat and pretty. I spent about two days researching adm templates (not my strong suit) and built an adm template for one printer with two test codes and tried it out.

Guess what? It didn't work. Nothing. Nada. Zip.

This is the adm template I built:

 CLASS USER
CATEGORY Sharp_Printer_Codes_(CustomADM)
  POLICY User_Code
  EXPLAIN !!SharpCodeHelp
  KEYNAME Software\SHARP\smvps01\pmvad-L1\printer_ui\job_control
    PART !!User_Code DROPDOWNLIST REQUIRED
    VALUENAME "account_number"
    ITEMLIST
      NAME !!None VALUE "00,00,00,00" DEFAULT
      NAME !!Audio VALUE "b1,d0,7d,34,d7,b0,82,6c,44,53,ad,a5,1d,01,58,45,00,00,00"
      NAME !!IT VALUE "50,9d,67,fa,39,8b,9b,17,10,5c,c9,ac,a7,ac,98,50,00,00,00"
    END ITEMLIST
    END PART
  END POLICY
END CATEGORY

[strings]
SharpCopierCodesCustomADM="User Codes for Sharp Copiers"
User_Code="Set the Sharp Copier User Code to: "
None="No Code"
Audio="Audio - 32147"
IT="IT - 25896"

; explains
SharpCodeHelp="You can set the User Code for the Sharp Multifunction Copier from here. Users can still change it but it will revert when group policy refreshes. This ADM was created by Wayne Doust."


After much searching I found out why it didn't work. DWORD values are not supported in adm templates!!! Why? I don't know. To me this is a major drawback to using adm templates, and for no apparently justifiable reason. What this means is that the only solution is to use registry hacks.

To make it more elegant, I created two sets of groups. The first set was the "Print-dist" groups: one for every geographical location of printers. Mostly this was a one-to-one relationship between group and printer, sometimes two or three printers.

The second set was the "Print-code" groups, each containing all of the users who print using a particular printer code.

Where possible, the groups were hierarchical: for example, the pc-audio group has the audio department as its one and only member.

Thirdly, I created the registry hacks for each printer code as follows:

Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\SHARP\smvps01\pmvad-L1\printer_ui\job_control]
"use_account_number"=dword:00000001
"set_login_name"=dword:00000000
"set_login_pass"=dword:00000000
"login_name"=""
"login_pass"=hex:00
"account_number"=hex:50,9d,67,fa,39,8b,9b,17,10,5c,c9,ac,a7,ac,98,50,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
  00,00,00,00,00,00,00,00,00,00,00,00
"use_PIN"=dword:00000000
"folder_index"=dword:00000000
"use_user_name"=dword:00000000
"use_job_name"=dword:00000000
"user_name"=""
"job_name"=""
"pin"=hex:00
"folder_pass"=hex:00


The hack was repeated for each and every printer (just in case). The registry hack is initiated from the login script, however you can use a user group policy if that floats your boat. In my case, I used KiXtart scripts as follows:

 ? "Setting up Print Codes"
IF Ingroup ("PrintCode-Audio")
    ? "Set Audio Print Code"
    run "regedit /s \\<domain>\netlogon\sharp\pc-audio.reg"


Distributing the printers is done via pushprinterconnections.exe. The print-dist groups are matched on a one-to-one relationship to group policy objects with the same name by adding the group to the "Security Filtering" section of the GPO. From within the Print Management tool on the Print Server, select the printer and "Deploy with Group Policy" and choose the appropriate print-dist policy as shown below:


The final step in this process is to run "pushprinterconnections.exe" either from the login script or in Group Policy (again according to preference). If you add the switch "-log" it will log the results of the printer distribution to the local workstation. These can also be harvested (if desired) to determine the success or failure rate of printer driver installations.

Although it is preferable to install the printer before pushing out the registry hack, it isn't necessary to do so.

Printing from Linux

I'm quite impressed with the Sharp driver for Linux. It offers very similar functionality to the Windoze driver except it lacks the bi-directional communications component - which means you have to manually tell the driver what optional features the printer has. It also annoyingly misses one important feature - the ability to use user codes!

After a lot of searching, I found a hack for it. Credit to nipquad and eric71 for pointing me in the right direction. You need to edit the supplied ppd file BEFORE installing the driver (you can do it later, but that's a little messier). The original examples I worked from were in Spanish, which I left unchanged just in case there was some dependency in the spelling. It looks like there isn't, so you should be okay to convert back to English. However, I found searching for 'numero' rather than 'number' was invaluable for debugging. :-)

Edit the ppd file for the printer you want to install (located in a tarball) and find the section labelled "Constraints" and add the following immediately before that section:

*% ====================================================================
*% Account number
*JCLOpenUI *JCLMXaccount/numero: PickOne
*OrderDependency: 80 JCLSetup *JCLMXaccount
*DefaultJCLMXaccount: A25896
*JCLMXaccount A25896/25896: "@PJL SET ACCOUNTNUMBER=<22>25896<22><0A>"
*JCLCloseUI: *JCLMXaccount
*% ====================================================================

In my case the default number is "25896". The number you put here doesn't matter that much (as long as it works); once the printer is installed, the number can be changed.

Save the file back into the tarball. If you intend to use more than one model of printer, modify all that you will use. In Ubuntu, the ppd files will be copied on driver installation to /usr/share/cups/model/sharp/en, however ymmv.
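If there are several models (or several codes) to deal with, the edit is easy to script. The sketch below simply inserts the stanza shown above into each PPD you name, ahead of the Constraints section; the script name, code and file names are examples only.

#!/usr/bin/env python
# Sketch: insert the JCLMXaccount stanza before the Constraints section of each
# PPD given on the command line. Code number and file names are examples.
import sys

STANZA = """*%% ====================================================================
*%% Account number
*JCLOpenUI *JCLMXaccount/numero: PickOne
*OrderDependency: 80 JCLSetup *JCLMXaccount
*DefaultJCLMXaccount: A%(code)s
*JCLMXaccount A%(code)s/%(code)s: "@PJL SET ACCOUNTNUMBER=<22>%(code)s<22><0A>"
*JCLCloseUI: *JCLMXaccount
*%% ====================================================================
"""

code = sys.argv[1]                       # e.g. 25896
for ppd in sys.argv[2:]:                 # e.g. MX4111N.ppd MX5111N.ppd
    out, done = [], False
    for line in open(ppd):
        if not done and "Constraints" in line:
            out.append(STANZA % {"code": code})
            done = True
        out.append(line)
    open(ppd, "w").writelines(out)

Run it over the extracted PPDs (for example: python patch-ppd.py 25896 MX4111N.ppd MX5111N.ppd) before re-packing the tarball.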

Install the driver using "sudo ./mx-c26-ps.install", which will spawn the following GUI installer:


Select the driver to install and continue:


Next, use CUPS to search for and add the printer:


Once Added, you can edit the properties:


If all is working, you will have your "numero" field. Do a test print to check the code is working.


PPD files are reasonably open-ended. There's not much to stop you from adding just about any feature this way - as long as the printer supports it.


Wednesday 17 April 2013

The UNIVAC 1100

In what seems like another lifetime, I learnt to program in Fortran on an ageing UNIVAC 1100 model 70 at the University of Wollongong whilst studying for my Bachelor of Computer Engineering degree. The UNIVAC had 1 Megaword of RAM (quite a lot back then) and filled several rooms in the computing centre that few people ever actually went into. You couldn't actually interface directly with the mainframe. You had the choice of submitting your programs to be run as batches via punch cards or tape, or you could use one of the flickering monochrome text-only terminals scattered about the campus. These terminals connected via 9600bps serial RS232 connections to one of three Perkin-Elmer minicomputers operating as terminal servers.

It could easily be argued that the Perkin-Elmers were nearly as capable as the Univac, however the Univac had oodles more storage thanks to its offline tape drives. These operated by migrating data from the expensive washing-machine-sized 20MB hard disk drives to tape, which would be removed by the operator and placed into tape storage. Files unused for 30 days were automatically moved offline and marked with an asterisk on your file list. If you wanted the file, there was a command to request it and the operator would receive a message to load tape XYZ. He would scurry off, find the tape, mount it and your file would be restored like magic. Once restored, you received a message that your file was available. Naturally, it was a source of amusement for students to save meaningless files, let them age offline and then restore them one at a time just to keep the operator nice and busy. Sometimes we would coordinate our activities to really keep him busy.

The UNIVAC was a very important mainframe back in its day. The University rented time to the local Government and external companies including BHP. As such (and also due to its temperamental nature) there was an operator/programmer on duty 24x7 in shifts of three. It goes without saying that the dog watch (midnight to 8am) was the most boring shift to work and the one generally worked by the most junior person. There was an Urban Legend floating around at the time that one of the operator/programmers was bumped from afternoon shift to dogwatch and was less than impressed with the arrangement. He bored easily, and since the only excitement that ever happened during dogwatch was when something broke, he decided to try to break something with software. He wrote a program to seek the HDD heads at varying frequencies, attempting to find the unit's resonant frequency. He found it, and set the washing-machine-sized HDD vibrating and bucking for several hours to see if it would crash. It didn't, however he found out that at frequencies around the resonant frequency it would 'move' in different directions. Working out the particular frequencies required to move the HDD unit in different directions, he wrote a program to link those frequencies to directional keys on the console keyboard. He could then run this program and move the HDD unit around the computer room (the HDD was connected via a long cable bundle). After a few nights amusing himself with his HDD 'robot' and failing to cause a crash, he gave up on it. However, a few weeks later, something really broke which required calling in an external engineer at around 3am. The engineer arrived onsite more than a little peeved at being called out so early and made a few derogatory comments to the operator about his skills and intelligence, so after the engineer had finished his work, the operator loaded up his little program and had the HDD 'attack' the engineer. When the day shift arrived they found the engineer had locked himself in the tape storage room, sobbing hysterically. When they asked the operator what had happened he replied "I don't know. He started screaming that the HDD was attacking him."

Nearly three hundred terminals around campus existed in three clusters (corresponding to the three Perkin-Elmers). There was a cluster each for the Library, Science and Mathematics. Engineers had to use one of these, however usually they were full. We complained about this and a small cluster of thirty terminals and four printers was made available for engineers. An old WOLF front-end terminal server was rolled out to manage this cluster. The WOLF was so old its control program was loaded via paper tape. In appearance it was a two foot square cube with blinking lights on one side and a pile of 25 pin RS232 ports on the other. Normally this would be placed in a secure location, however it was placed in the same room as the terminals, plonked right next to one of them! There was also a clearly labelled 'Control Port' directly above the other RS232 ports. Conveniently, there was also a 'programmers manual' in a pocket on the back of the WOLF. Seriously? What were these guys thinking, putting this combination in the hands of engineering students? This is like leaving toddlers unsupervised in a toy store. All we had to do was sit down at the terminal next to the WOLF, unplug the terminal server port corresponding to the terminal number, plug it into the 'control port', pick up the programming manual and... play! A side effect was that it was a great way to 'reserve' a terminal - you simply left it plugged into the control port and others would just think it wasn't working as they couldn't log in. Those of us in the know would just sit down and plug the terminal into the correct port.

At first it was just harmless pranks: making messages pop up on terminals telling people their account was suspended, or shutting down the terminal where some young fickle thing of rare feminine beauty was sitting and then offering your assistance to help fix the problem, exchange phone numbers etc. Soon, however, we learnt you could do SO much more. The WOLF did a lot more than act as a simple terminal server. It handled security flags for the UNIVAC. In essence, the WOLF asked the UNIVAC what security a user had, translated that to terminal numbers and then informed the UNIVAC what security flags the terminal had as a result - the UNIVAC trusted the WOLF implicitly. It was a simple matter to reset terminal-based security flags to whatever you wanted. It didn't stop there, however. You could reset these flags for ANY terminal - not just those handled by the WOLF! The only terminal you couldn't do this for was the control terminal in the computer room used by the operator/programmer on duty.

For amusement, you could stroll into the cluster an hour before an assignment was due, sit down at the control terminal and type ']B', which would shut down the entire cluster accompanied by a chorus of thirty screams. After the mass exodus took place, with people rushing to other clusters in the vain hope of finding a free terminal there, you could then type '320R' which would bring the cluster back on-line.

There was one real use we put our WOLF knowledge to, and that was the annoying sixty-second timeout for terminal use. If there was no keypress in sixty seconds, the WOLF would log you out and you would lose whatever you were working on. Sometimes (like just before assignments were due) the UNIVAC would be so busy it would take longer than sixty seconds to respond to a line of input. The 'UNIVAC twitch' was a developed impulse to hit the spacebar at around fifty-second intervals to stop this from happening. However, with a simple modification to the control program of the WOLF, this was no longer necessary.

There were several members of our group who played around with the WOLF to see what it could do, and we would compare notes and share 'war stories' of things we had done with it. One member of the group found a way to elevate the permissions of a program running on the UNIVAC to do the same thing - it was quite an achievement. We reported our findings to the computer centre; they acknowledged the issue and essentially dismissed us, for the simple reason that it would only ever affect students and the control terminal would never be affected; it could shut down any offending program and then reset the security flags.

One member of our group (who shall remain anonymous) decided to test this premise and exploit the fact that the control terminal could only shut down one program at a time. He created two programs named RHOOD and FTUCK (UNIVAC filenames were limited to five characters). Both programs essentially did the same thing and each program contained the code for the other. Upon executing, RHOOD would reset all security flags for everything everywhere to zero - meaning nothing would work except the control terminal - and then check for the existence of FTUCK. If FTUCK was not running, it would send the message "Alas Friar Tuck, thou art assailed. I come to thine aid!" and respawn FTUCK. FTUCK, in turn, would do the same thing for RHOOD. He wrote the program under a 'borrowed' account and ran it.

It was like turning out a light. The UNIVAC was down for a week. 

The following Monday we came in to find the UNIVAC was up and running. A team had worked the whole weekend on an orderly shutdown and restart - something that had never been done since the UNIVAC had been commissioned ten years earlier. Nobody knew the process and no one was sure if it would ever come back to 'life' again properly. Fortunately, it did.

We had a visit from the manager of the computer centre to one of our lectures to make a general announcement. It just happened to be a subject that everyone in our group took (except one). He explained what had happened and how, in a little too much detail for us to be comfortable with, then he said "We know the UNIVAC security isn't good. By exploiting it, you aren't proving very much. Now we have a pretty good idea of who you are. If we can ever prove it, the least we can do is expel you." and with that he left. Our activities after this were much more sober than before - with one minor exception...

Most terminal clusters had printers attached to them, however the quality of the output on these 9-pin dot matrix printers left much to be desired. To get really good quality output, you sent the print job to the high quality (for the day) line printer in the computer centre. This central printer was hidden away and would merrily print jobs with the banner of the owner of the job as the first page. You sent the print jobs in and picked up your jobs the next day from a pigeon hole corresponding to the last two digits of your user id. My id was 8515682 - so my printouts would be in the pigeon hole marked 80-84.

The line printer worked on a simple principle: it had a fast rotating platen with every printable character on it in order from ABC on down. Behind the platen was a row of 132 separate hammers that would strike the platen at the exact moment the correct character passed in front of it. The hammers would strike in seemingly random order until the entire line had been printed - so it appeared as though the printer produced a line at a time. This all happened very quickly, with a peculiar by-product: the printer sounded musical. Hitting a tightly stretched metal platen with a hammer would produce a musical tone, with the pitch according to which hammer or combination of hammers had fired at that precise moment. Somebody, somewhere had produced a document that mapped musical notes to characters printed on the line printer. One of the members of our group wrote a surprisingly short Fortran program to take musical notation and send the text output to the printer to play music. He chose the 1812 Overture, which has explosions and cymbal clashes in it. For the cymbal clashes he substituted a series of form feeds (these were so fast the line printer sounded like it was screaming) and for the explosions he substituted a line of 'ABCDEFG....' which meant all 132 hammers would fire at once with a huge 'bang'. Then, without too much thinking, he ran the program.

Next day when he went in to pick up the printout, there was a single torn page in the pigeon hole with a note from the computer centre manager reading 'Please see me'. To say the least, the manager was not amused and presented him with four boxes of printout (each box containing about 2000 pages). By chance, three more of us were waiting outside to see the outcome of the meeting, so we each carried a box of printer paper.

Even in 1985, the Univac was getting on and needed to be replaced by a Unix system. Professor Reinfelds and his grad students developed a method for porting Unix to the Perkin-Elmer minicomputers. This was such a dramatic breakthrough that he formed the Wollongong Group and went private, taking his methodology (and grad students) with him for greener pa$ture$. However, the University was left with three perfectly functioning Unix-based minicomputers.

The non-portable operations of the Univac were migrated to a Univac emulation on a UNISYS 2200 minicomputer. Academics, staff and students were moved to the three Perkin-Elmers and a couple of Pyramid Nile 9000s. Sometime in 1989, the Univac was switched off for the last time, the constituent parts sold for scrap, and the computing centre (now renamed the Information Technology Centre) gained several new offices from the vast area occupied by the Univac.

Thursday 24 January 2013

Installing Ubuntu 12.04 LTS on a HP 6560b notebook

HP notebooks seem to be hostile to Linux for some reason. From what I can gather, some of the HP utilities write data to track 0 of the boot HDD. As far as Windows is concerned, as long as sector 0 has been spared it doesn't care what you write there. Linux, however, uses GRUB (GRand Unified Bootloader), which only has its stage 1 loader located in sector 0 - the rest of it lives in track 0. Applications (other than boot loaders) aren't supposed to write to track 0, however this is the stuff of another article.

The challenge then is to make the HP notebook dual bootable; make sure as many devices function under Linux as possible and (preferably) virtualise the Windows partition within Linux so that it doesn't become an either/or choice.

Partitions

The first hurdle is the extra partitions that HP creates: the HP_Recovery partition and the HP_Tools partition. HP also has a bootable "boot" partition, making a total of four primary partitions! You can only have four primary partitions, which means that even after blowing away one of them, the fourth would have to be an extended partition to leave room for Linux. So, one or both of these partitions have to go. Copy the files on the HP_Tools partition to C:\HP_Tools and blow it away. You gain an immediate 5GB of space there.

Choosing to lose the HP_Recovery partition is a little more difficult. However the gains are worth it. You get back 15.3GB of disk space plus you regain continuity of the file system.

My preferred partitioning setup for a dual boot system is:

P1: NTFS (Windows)
P2: ext4 (Linux boot partition - /boot)
P3: Extended Partition
P4: Unused
EP5: FAT32
EP6: Linux swap partition
EP7: Linux LVM2 partition

The LVM partition is then allocated accordingly to the following mount points:

/ - unlimited
/home - unlimited
/var - limited
/tmp - limited
/sys - limited

This is fairly convoluted, but it fits my style of thinking. If you want to create a single root partition for everything then go for it.

In practice, I have setup the partitions as follows:


P1: NTFS (Boot)
P2: NTFS (Windows)
P3: Extended Partition
P4: ext4 (/)
EP5: Linux Swap



Not ideal and no LVM, however it does put the swap file at the end of the disk and still allows me to dual-boot. I decided not to use LVM because Ubuntu does not offer it as a native option (unlike CentOS and SuSe) and this is a notebook and not a server - I should be able to manage a contiguous file system on it. If I need more space I can always blow away the two NTFS partitions and use that space for /data.

Ubuntu Setup

After partitioning, the setup continues. I run an update and begin installing the additional packages from the software centre. This is a breeze. I find it amazing that Linux has gone from making it difficult to install software to making it trivially easy. Connecting to the Ubuntu One cloud restores all the files from my previous notebook to this one. I also set up my login for Dropbox and synchronise with Conduit. Setup time is quite quick.



I struggle with the Unity interface for a while before switching to Gnome with the Gnome Panel instead of the Gnome Shell. This makes it easy for me to enable compiz for a (real) 3D desktop. My real gripe with 12.04 (and Gnome 3.x and Unity) is that so many things that used to "just work" are now broken. Some of these could probably be fixed easily if the packages were properly maintained and ported to GTK3. I can see why Canonical decided to pursue the Unity interface - it makes a lot of sense in light of the insane direction the Gnome project is going. Other distros have tried to keep Gnome and provide their own customisations: Linux Mint replaces the shell with the Cinnamon interface.

One of the packages I struggled with is nanny. It seems that GTK3 really breaks this app. It is listed in the 12.04 software repository, however it fails to appear on the Unity dock. This was one of my reasons for moving to Gnome, however even that didn't fix things fully. There is a PPA listed privately to "fix" nanny, however although it allows the nanny-admin-console to run, you cannot make any changes. Ubuntu need to remove nanny from the software centre.

The other struggle I had was with virtualbox. After installing it I realised I hadn't enabled VT-x in the BIOS. However, even after making the change I couldn't run a 64 bit virtualised OS. I installed vmware workstation and had no problem with it. Since virtualbox bolts into the kernel, I uninstalled virtualbox and re-installed it. This time I had no problem with virtualised 64 bit.

Virtualising Windows 7

I tried a variety of methods of p2v'ing the Windows 7 partitions. Most of the methods I found were based on Windows XP, however I also suspect that installing Linux first might have compromised my efforts. Success was achieved by installing the latest version of vmware converter on Windows 7 and running it in standalone mode, but creating a vmdk for vmware workstation 8 on an external HDD. I was then able to create a virtualbox machine that will run the p2v'd workstation. Here's the full procedure:

1) Boot Windows 7. Download vmware converter and install on the Windows 7 machine to be converted.

2) Run the converter and create the vmdk on an external HDD. In my case 64GB was required and it took several hours. Make sure that as part of the conversion process you disable all hardware services - particularly the HP services. Also change the controller emulation to LSI SCSI. Note that if the external drive is FAT32, it will divide the vmdk into chunks.

3) Boot to Linux. Create a new guest OS in virtualbox and connect to the vmdk on the external drive, changing from SATA to SCSI. Edit the settings to make the RAM at least 1024MB. Enable PAE/NX, VT-x and IO APIC. Change the display settings to 128MB of VRAM and enable 3D and 2D acceleration. Change the network adapter from NAT to bridged.

4) Start the Guest OS, allow it to install all the drivers and then reboot. Install guest additions and boot again.

I've tested the vmdk running in both vmware workstation and virtualbox - both work fine. If you don't plan on using virtualbox you can create the vmdk for version 9. I turn off all of the unnecessary stuff in Win7 to leave a vanilla shell running in 800x600 mode. Since I plan on virtualising the apps, the desktop is unnecessary.


The choice between vmware workstation and virtualbox is a difficult one. Virtualbox is free, but with vmware workstation you can virtualise the applications on the Linux desktop as though they were just another app. The advantages of this are too great to simply ignore. I've also found in testing the two (on the same vm) that vmware workstation is much less memory hungry - only taking the RAM it currently needs. That doesn't seem to be the case with virtualbox as the above trace shows.

Crossover

The last thing to install is crossover. This is the only commercial application I own. It is simply invaluable if you want to run a Windows app on a Linux desktop without emulation. I use this mainly to run Visio - an app for which there is no real competitor.

Conclusion

I now have my workstation working pretty much how I'd like it to be. I can do everything with it now that I used to, plus I have access to Windows 7 whenever I need it. The next few weeks should bed down the installation.

Tuesday 15 January 2013

How I spent my day (Old Blog)

This article is one I wrote nearly ten years ago for my old blog. It was originally written in three parts and explains the origin of my adage "I'd rather work on a ten minute job than a five minute one. A ten minute job only takes ten minutes to complete. A five minute job takes at least two hours."

It's interesting to note how terminology and technology have changed in ten years. Flash drives were commonly referred to as "pen drives" and dial-up modems are almost unheard of now. Most of the issues discussed with installing Linux are now non-existent - back then you really had to know what you were doing to work with Linux; now Linux is so easy even an MCSE can work with it.

Tuesday 8 January 2013

A lazy sysadmin is a good sysadmin

As the sysadmin, it is your job to keep the IT systems running smoothly. If everything is running, no one notices. If it isn't, they aren't interested in your petty excuses. Unfortunately, that's the reality of the situation. That being the case, it is in your best interest to keep everything operational with as little downtime or interruption as possible. There's a mixture of human expectation, perception and reality all mixed up here, but essentially this means that in order to be good at your job it helps to be lazy.

Characteristics of a lazy sysadmin

Backups

Lazy sysadmins will be anally retentive when it comes to backups. They will ensure that backups are not only run, but tested to ensure they actually work. Backups will be stored offsite and rotated regularly. Initial backups will be to disk and then flushed to tape. Backup agents will be purchased for every system possible to make granular restoration easier. Complete system backups will also be kept and refreshed every 3 to 6 months so that entire systems can be restored in minimal time. Trial runs will be conducted to familiarise the sysadmin with the process of disaster recovery.

Virtualisation

Lazy sysadmins will also virtualise every system they possibly can. Virtualised servers make life easier by streamlining tasks and removing the hardware dependency on servers. Snapshots will be made to enable easy rollback from upgrades and service pack applications (if required). Lazy sysadmins will also have snapshots stored on redundant hardware for DR purposes.

Clustering / High Availability

All mission critical server applications will be clustered with failover/failback capability. This will allow the sysadmin to sleep at night if a single server happens to fail. Lazy sysadmins recognise that a 3 (or 5) server cluster is the ideal approach as it allows for redundancy even if one server is down for maintenance.

UPS / Generator / Airconditioning

Lazy sysadmins will insist that all IT systems are protected by good quality, server-grade UPSes that are either online or line-interactive. The UPS will be managed, have remote sensors and produce regular environmental reports and issue alerts. They will configure their servers to shut down gracefully on power failure or in unfavourable environmental conditions. They will push for backup generators for the UPS and for airconditioning, stating that the UPS may run the equipment - but not the airconditioners. They will also push for computer room quality airconditioners - preferably redundant - and not settle for domestic grade split systems.

Hardware

Lazy sysadmins will ensure the IT equipment that is purchased is tier 1 quality (HP, Cisco, IBM, Dell etc) with capability for expansion and at least 60% overhead for current requirements. They will not settle for tier 2 or white box equipment.

Remote Access

Lazy sysadmins will ensure that as many tasks as possible can be conducted from home or on the road and where possible by phone or tablet. The time to connect should be as low as possible.

Monitoring

Lazy sysadmins will setup detailed, granular monitoring of all the equipment, servers and services in a hierarchical fashion. A dashboard will be available for overview with external monitoring and alerts sent by email or SMS depending upon the severity. The lazy sysadmin will regularly check the log files of their systems looking for inconsistencies that may lead to larger problems at a later date.

Self-Healing Systems

Lazy sysadmins will make sure all essential services are self-restartable. Scripts will be written to monitor and record the system configuration before and after service restart. Ideally this will be simply an extension of the capabilities of the monitoring system.
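As a rough illustration, a minimal sketch of such a script is below; the service names, config locations and the use of the generic service command are assumptions for the example, and a real version would hook into the monitoring system rather than run standalone.

#!/usr/bin/env python
# Sketch: restart essential services that have stopped, recording a listing of
# their config trees before and after. Names and paths are example values.
import subprocess, time

SERVICES = {"cups": "/etc/cups", "postfix": "/etc/postfix"}

def snapshot(confdir, tag):
    # Record the config tree listing so changes around a restart are visible
    with open("/var/log/selfheal-%s.log" % tag, "a") as log:
        log.write(time.ctime() + "\n")
        log.write(subprocess.check_output(["ls", "-lR", confdir]).decode())

for svc, confdir in SERVICES.items():
    if subprocess.call(["service", svc, "status"]) != 0:
        snapshot(confdir, svc + "-before")
        subprocess.call(["service", svc, "restart"])
        snapshot(confdir, svc + "-after")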

Security

Lazy sysadmins will never compromise on system security. They will establish secure firewalls, secure vpn, vlans, dmz access, email scanning, forward and reverse proxies, virus protection, enforced password security and apply multi-factor authentication where possible.

Patches and Updates

Lazy sysadmins will apply patches and software updates on a regular basis. Patches increase the stability and security of your systems. Updates extend functionality and reduce the time required when external support is needed.

Documentation

Lazy sysadmins recognise they have a poor memory, so they make sure that all new systems are built three times: once to familiarise, once to document and the last time to test the build documentation. That way if and when it comes time to rebuild that system, they know the documentation is accurate. Lazy sysadmins also write their system documentation aspirationally: that is, the system is documented how they would like it to be rather than as a snapshot of its current condition. That way, over time the documentation becomes more accurate rather than less accurate.

Training

Lazy sysadmins recognise that the more people that know what they do, the less likely they will be called out after hours. They will train their juniors to know as much as they do and encourage them to learn more independently. They will encourage juniors to become mini-experts in the smaller systems and document their systems accordingly.

So, if you are a sysadmin, make sure you are a good one by being as lazy as possible and following the tips listed above.

Monday 7 January 2013

Email status check

Okay, you're offsite and someone rings up to say the email system isn't working. Now, you KNOW that nine times out of ten the email system is working perfectly - it's just something the user is doing wrong. How can you quickly check to see if email is working without logging into the servers? Well, you could simply send an email from your gmail account to your work account and vice versa. That would be a good indication that everything is working, but if you don't get the email, it tells you absolutely nothing other than something might be wrong with the email system.

Most enterprise mail systems have a number of servers involved in the generation, transmission and reception of email. In generic terms we have the following elements:

MTA - Mail Transfer Agent
MDA - Mail Delivery Agent
MSA - Mail Submission Agent
MRA - Mail Retrieval Agent
MFA - Mail Filtering Agent
MUA - Mail User Agent

Many sysadmins may exclaim at this point "Hang on - I don't remember there being that many elements to email delivery!" and the reason for that is we are now in the post-MARID world of Internet-based mail as explained by RFC 5598 (2009). Quite simply: things are different now. If you are running a mail system that was set up before this time and not updated, chances are that you aren't compliant with the IETF standard. If you're running MS Exchange out-of-the-box, then you definitely aren't standards compliant. However, making your email system RFC-compliant is the stuff of another article...

RFC5598 divides the various agents into their respective areas of responsibility called "Responsible Actor Roles". These are:

 - User
 - Message Handling System (MHS)
 - ADministrative Management Domain (ADMD)

The traditional flow of email was:

MUA -> MTA -> .... -> MTA -> MUA

Now, the email flow is more commonly:

MUA -> MSA -> MTA -> ... -> MTA -> MFA -> MDA --> MRA --> MUA

where -> is a push operation and --> is a pull operation.

Obviously, in such a system there are a number of elements that can go wrong and be described as "the email system is down".

On email systems I administer, I usually create a dummy account called "Email-Check". At its most basic level, you set it up with an Out of Office reply that says "Email is working". However it doesn't end there. Each point in your message reception system can be set up to respond with diagnostics on each component. A fully working system will receive replies from each component in the chain. In the second example, if you send your email to email-check@your-domain and receive a reply from the MTA and MFA, but not the MDA or the MRA, then you can reasonably assume the problem lies with the MDA - that should be the place you start looking.
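As a rough illustration from the client side, a small probe script can send the test message from an outside mailbox and then report which component replies actually came back. The hosts, account, credentials and subject tags below are assumptions for the example.

#!/usr/bin/env python
# Sketch: send a probe to the email-check account, wait, then report which
# component replies arrived. Hosts, account and tags are example values.
import imaplib, smtplib, time
from email.mime.text import MIMEText

PROBE_TO = "email-check@your-domain.example"
OUTSIDE_ACCOUNT = "probe@gmail.com"
COMPONENTS = ["MTA", "MFA", "MDA", "MRA"]   # tag each responder puts in its subject

msg = MIMEText("Email status probe")
msg["Subject"] = "email-check probe"
msg["From"] = OUTSIDE_ACCOUNT
msg["To"] = PROBE_TO
smtp = smtplib.SMTP("smtp.gmail.com", 587)
smtp.starttls()
smtp.login(OUTSIDE_ACCOUNT, "app-password")
smtp.sendmail(OUTSIDE_ACCOUNT, [PROBE_TO], msg.as_string())
smtp.quit()

time.sleep(120)   # give the chain time to respond

imap = imaplib.IMAP4_SSL("imap.gmail.com")
imap.login(OUTSIDE_ACCOUNT, "app-password")
imap.select("INBOX")
for tag in COMPONENTS:
    status, data = imap.search(None, '(UNSEEN SUBJECT "%s")' % tag)
    found = bool(data[0].split())
    print("%s: %s" % (tag, "reply received" if found else "NO reply - start looking here"))
imap.logout()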

Practical Examples

MailMarshal

1. Write a rule in MailMarshal that triggers when the to: address is email-check. Have the rule execute the file "mail-check.cmd" as an external command and pass the following parameters to it: servername@domain {ReplyTo} {SenderIP} {HelloName}

2. Write mail-check.cmd as follows:

@echo off
rem Parameters passed in by the MailMarshal rule:
rem   %1 = servername@domain (from), %2 = {ReplyTo}, %3 = {SenderIP}, %4 = {HelloName}
c:
cd \scripts
echo Email check for [servername] > mmcheck.txt
echo. >> mmcheck.txt
rem Record the current time without pausing for input
echo.|time|grep current  >> mmcheck.txt
echo.>>mmcheck.txt
echo [Servername] Mail Marshal Service Information >>mmcheck.txt
echo. >>mmcheck.txt
rem Dump the service list and pull out the MailMarshal entries
start /wait msinfo32.exe /categories +SWEnvServices /report msinfo.txt
type msinfo.txt | grep MailMarshal >> mmcheck.txt
echo. >>mmcheck.txt
echo Sending IP  : %3 >>mmcheck.txt
echo Helo Name   : %4 >>mmcheck.txt

echo Sending Mail.
rem Reply to the original sender with the diagnostics file
bmail -s 127.0.0.1 -t %2 -f %1 -h -a "MailMarshal Check [ServerName]" -m mmcheck.txt > sentmail.txt

Of course, you'll need to source the executables for grep.exe and bmail.exe or provide substitutes in order for this to work.

Postfix / Sendmail

If you are running Postfix or Sendmail, then this job can be done using a milter. A milter is generally written in C, Python or Perl. Personally, I prefer Perl. The way you write your script will depend on your actual setup. I plan on posting a postfix setup example sometime; I'll include a milter for email-check at that time.

Exchange

Unfortunately, dealing with actual messages in Exchange requires an MUA. I don't see any way around this except by setting one up to act on these messages. Technically, there's nothing to stop you running an Outlook client on an Exchange server with autologon (other than sheer common sense that is).

Groupwise

Being a full groupware system, there are a number of ways that Groupwise can respond and react to email messages at the server level. The easiest way is through the Groupwise API engine (GWAPI). The GWAPI can respond to the content in messages and trigger external scripts and is relatively simple to install and configure. The only downside is that ongoing development of the API has ceased since version 5 - so it will essentially run as an external system and only on a Netware server. The next easiest option is to write a Custom 3rd Party Object (C3PO), however that will essentially be an MUA that requires the Groupwise client to be installed. The elegant solution is to create a Trusted Application using the Groupwise TAPI that will directly access the message store.

Lotus Notes/Domino

Any decent Notes system will have at least one programmer managing the Notes/Domino infrastructure. Implementing a script to report on the status of the Domino system should be trivial.

Friday 4 January 2013

OS/2 Obituary

OS/2 version 1 was a dismal failure - that's really all I have to say about that. Version 2.0 had moderate success, mainly due to Citrix Winview (the precursor to WinFrame and MetaFrame), however Warp Server (version 3 through to 4.51) was a spectacular OS.

IBM decided to collaborate with Microsoft in creating OS/2. The original idea was that Windows 3.x would be the desktop OS and OS/2 would be the server OS. As a result there was a fair amount of shared code between the two. At the time, Microsoft didn't have a server/network solution and IBM had Lan Manager. Microsoft also had a deal with Novell that allowed Windows to dovetail into Netware and use the IPX/SPX protocol. Novell and IBM also had their deal which allowed their stuff to interoperate as well. It was all really cozy: Microsoft owned the desktop, Novell owned the network, IBM owned the server. Everyone knew whose turf was whose.

Then a weird thing happened. Microsoft released the Windows 3.1 upgrade and sold 30 million copies in the first two months. That was on the back of 8 million in sales of Windows 3.0 over the previous two years! Microsoft crunched the numbers, dumped the deals with Novell and IBM, and decided to write their own server OS and networking protocols.

However the deal with IBM was set in stone. IBM had the rights to nearly all of the Windows APIs and in turn, Microsoft owned about 30% of Warp Server. The divorce was a bitter one that (intentionally) delayed the release of Warp Server. But release it did. However another weird thing happened...

On the release date of Warp Server, Windows NT had more press coverage, advertising and editorial space devoted to it than Warp Server itself. In fact, nearly twice as much. At this time, NT wasn't even in alpha. It was vapourware! Over the weeks and months that followed, press coverage for Warp Server declined but NT coverage remained constant. Microsoft simply out-marketed Warp Server.

The reality was that Warp Server was much more capable than even Windows NT 4.0 - which wasn't released until years later.

The irony was that Microsoft made more money per copy of OS/2 that IBM sold than it did from every copy of Windows NT they sold. Essentially, sales of OS/2 Warp Server funded the development of Windows NT.

To buy time, Microsoft released an update to Windows 3.1 called Windows for Workgroups 3.11. This had a very crude networking system called NetBEUI (NetBIOS Extended User Interface). Microsoft simply took NetBIOS (which came to them from the IBM deal) and instead of attaching it to a routable protocol such as IPX or IP, they simply sent it out as a raw broadcast. It was really horrible, but it worked. As a side issue, Novell engineers suddenly discovered that all the great interoperability between Windows and Netware had disappeared. Workarounds were established, but things would never be the same.

The other gain that MS had from IBM was the HPFS file system that IBM developed. MS made a few small changes and called it NTFS.

The deal between Novell and IBM held solid, and Novell released a version of Netware that ran as an application on top of Warp Server. This meant that Novell sites (accounting for 87% of networks at the time) could run a single server for both applications and networking. And because of the shared code, Windows apps could run on Warp Server. Netware for OS/2 ran with only a 5% overhead compared to a bare-metal server.

Quite simply, OS/2 Warp Server was better, faster, cheaper and more capable than Windows NT ever was. At the time Windows NT didn't even exist as a product, yet Microsoft cut deals with large organisations and governments worldwide to install Windows NT and not OS/2. In nearly every case these decisions were made without reference to the technical people in the organisation. Microsoft had worked out that as long as their people were playing golf with your boss, your opinion as an engineer wasn't going to count for much. IBM relied on the (now waning) adage that nobody ever got fired for buying IBM.

Yet many places DID buy and implement Warp Server, and in some cases it continues to be used. NCR ATMs still run OS/2 Warp Server, as do ticketing machines, baggage handling systems, voice mail systems, telecommunications systems and satellite control systems. Warp Server particularly shines in environments where any latency is unacceptable, such as real-time systems. OS/2-trained engineers describe Warp Server with "it just works": it doesn't crash, doesn't need to be restarted on a regular basis, doesn't suffer from bottlenecks or glitches, and doesn't need to be restarted for updates. You install it and it runs for the next ten years.

IBM eventually gave up on Warp Server, selling it to Serenity Systems in 2001, where it was renamed eComStation. The latest version is 2.1GA (General Availability), released in 2011. Sales are low and Serenity Systems allows you to download it for free. It will run virtualised in Oracle VirtualBox.

As a side irony, about ten years ago a company in Russia wanted to run Warp Server virtualised. VMware couldn't do the job at the time, so they hired some programmers and created a new company to write the virtualisation software. They named the company Parallels Inc.

There is a project called OSFree that aims to recreate Warp Server as an open-source OS.

Wednesday 2 January 2013

Wildcard email addresses in MailMarshal

A few years back, there was a free service that let you generate unique email addresses that all redirected to a single email account. This was great for web forms that required a valid email address: you would generate an email address for that particular website and disable it if they started spamming you.

Well, like all great "free" services, it eventually became "non-free", so that was the end of that. However, with a little ingenuity, it is possible to get MailMarshal to do something similar. Here's how:

Grammar

Firstly, you need to identify the specific grammar of your email addresses, develop a secondary grammar for the wildcard addresses, and then make sure there are no "collisions" between the two - that is, no legitimate address should already match the wildcard grammar. For example, most organisations have email addresses conforming (more or less) to the following grammar:

<first_name>.<last_name>@domain_name

That being the case, you can then define your wildcard grammar as follows:

<first_name>.<last_name>.<wildcard>@domain_name
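
To make the collision point concrete, here is a minimal sketch - Python regexes standing in for the two grammars rather than anything MailMarshal-specific, with example.com and the test addresses purely illustrative. Note that a legitimate three-part address (say, someone with a dotted middle name) already matches the wildcard grammar, which is exactly the kind of collision to check for:

import re

# Rough sketch only: the two address grammars encoded as Python regexes.
normal   = re.compile(r"^(.+)\.(.+)@example\.com$")
wildcard = re.compile(r"^(.+)\.(.+)\.(.+)@example\.com$")

for address in ("joe.bloggs@example.com",           # normal grammar
                "joe.bloggs.website1@example.com",  # wildcard grammar
                "mary.jane.watson@example.com"):    # collision: a real three-part address
    # Check the wildcard grammar first - a wildcard address also matches
    # the looser normal pattern.
    if wildcard.match(address):
        print(address, "matches the wildcard grammar")
    elif normal.match(address):
        print(address, "matches the normal grammar")

If your organisation issues addresses like mary.jane.watson@example.com, the rules below would strip the last component from them, so you would need to choose a different wildcard grammar (a different separator, for instance).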

Create Wildcard Group

The second step is to create a MailMarshal group called "Email Wildcard" and add the address of everyone who will be using a wildcard, plus a matching .* entry for each user. For example:

joe.bloggs@example.com
joe.bloggs.*@example.com

It is possible to dispense with this step, however the group requirement gives you more control.

Rule 1

Some preliminary work is required here:

Firstly, create an external command called "Echo Old address to Threadno File". This is necessary because MailMarshal rules can only work with the message header fields, not the SMTP envelope: you can work with the To: field, but not the RCPT TO: field, which (unfortunately) is where the real delivery happens. So we need to capture the address and work on the message outside of MailMarshal.

The external command will have the following properties:

Command Line: cmd
Parameters: /c echo {Recipient}>"{Install}\Address{Threadno}.txt"
Tick the "single thread" and "execute once only" boxes. Leave the rest unchanged.

All this command does is write the actual recipient email address out to a file named after the processing thread. Left like this, it achieves nothing by itself - we will build on it later.
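
For example, if a message arrives for joe.bloggs.website1@example.com and MailMarshal happens to process it on thread 3, the command that actually runs would look something like the following (the install path shown is purely illustrative - {Install} expands to wherever MailMarshal is installed on your server):

cmd /c echo joe.bloggs.website1@example.com>"C:\Program Files\MailMarshal\Address3.txt"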

Next, we need to write the rule that looks for the email wildcard. This is done with a Header-Match rule, manually defined as follows:

Match against: TO: CC: BCC:
Field parsing method: email addresses
Field search expression: (.+)\.(.+)\.(.+)@domain_name

For informational purposes, create a message stamp to indicate that the email was addressed to a wildcard address:

-----
Wildcard email address - {Recipient}


Next, a header rewrite rule called "Address Change" needs to be created, as follows:

Match Against: X-AddressChange:
Field Parsing Method: Entire Line
Insert If Missing: Address Change

This will add the X-AddressChange field to the headers, flagging that the address is to be changed and setting us up for Rule 2. The complete Rule 1 will look as follows:

Standard Rule: Email Wildcard deletion - Rule 1
When a message arrives
Where message is incoming
Where addressed to 'Email Wildcard'
Where message contains one or more headers 'Email Wildcard'
Run the external command 'Echo Old address to Threadno File'
    And write log message(s) with 'Email Wildcard'
    And stamp message with 'Email Wildcard Stamp'
    And rewrite message headers using 'Address Change'
And pass message to the next rule for processing.
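
For a message addressed to joe.bloggs.website1@example.com, the net effect of Rule 1 is that the original address is recorded and stamped, and the headers gain the new flag field - something along these lines (illustrative only; the exact value MailMarshal inserts comes from the "Insert If Missing" setting above):

To: joe.bloggs.website1@example.com
X-AddressChange: Address Change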


Rule 2

This rule is a lot simpler: it looks for the X-AddressChange field and then rewrites the email address to remove the wildcard component.

The Header Match rule needs to be defined to look for "X-AddressChange" with the search expression '.+'

The Header Rewrite rule will be as follows:

Matching Fields: To:, Envelope Recipient:
Field Parsing Method: Email addresses
Field Search Expression: ^(.+)\.(.+)\.(.+)@(.+)
Substitute into field using expression: $1\.$2@$4
Enable Field Changes: (ticked) 
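
As a sanity check, here is the same rewrite expressed as a quick Python sketch (Python uses \1-style group references instead of MailMarshal's $1 syntax, and example.com is again just an illustrative domain):

import re

# Drop the third (wildcard) component and keep the rest of the address.
pattern = r"^(.+)\.(.+)\.(.+)@(.+)$"
print(re.sub(pattern, r"\1.\2@\4", "joe.bloggs.website1@example.com"))
# prints: joe.bloggs@example.com

The capture groups resolve so that the first two components survive, the wildcard component is discarded, and the domain is left alone - which is exactly what Rule 2 does to the To: and Envelope Recipient fields.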

The final rule will be as follows:


Standard Rule: Email Wildcard deletion - Rule 2
When a message arrives
Where message is incoming
Where message contains one or more headers 'X-AddressChange Exists'
Rewrite message headers using 'Email Wildcard Deletion'
And pass message to the next rule for processing.


Rule 1 will stamp the message so you will know the original address used. If you start receiving spam via one of these addresses, add it to your recipient blacklist: the spam will stop, but your regular email will still be delivered. For example, suppose you sign up to a site using joe.bloggs.website1@example.com as your email address and that site starts spamming you - blacklist that one address and the rest of your mail is unaffected.

It is possible to add a "Rule 1.5" that appends the original recipient address to the subject line - that way you can sort your emails by subject. The rule would be very similar to Rule 2.

This is just one example of how you can push the boundaries of what MailMarshal is capable of by using external commands.