Creating Engagement within Online Lectures – No More PowerPoint

Introduction

One of the great things about academia is that there are so many different teaching styles, and students get exposed to a range of presentation methods. The academic environment is changing, though, and the traditional lecture is under pressure from many angles. While the presentation of fundamental concepts must still be a core foundation of a module, the availability of cloud storage and on-line video channels now provides academics with a way to present their material so that students can catch up with it in their own time. This then allows new teaching methods to be used within the traditional lecture environment, such as question-and-answer voting, or two-way discussions between the lecturer and the class. While key principles will still be presented in the lecture, a more dynamic teaching environment may benefit students, and move away from a chalk-and-talk style. In the future, the academic’s instruction of “laptops-off” might become “laptops-on”. If this happens, academics must guard against the lecture becoming a place where the provision of key academic principles becomes limited, and against creating an environment where students are engaged for the lecture but walk away having learnt little. Education often requires reinforcement, and the need to re-investigate and study with a degree of rigour (which is the reason that we focus students on exams and coursework).

Microsoft PowerPoint has been the friend of the academic for over 20 years, but the Internet and YouTube are catching up with it fast. There are also many changes in academia, including the support for distance learning methods, and the use of on-line lectures to actively recruit new students. On-line lectures, especially ones which are delivered through YouTube, can now be used as a powerful marketing tool for universities, and provide candidates with a “taster” of the content on courses.

Figure 1: Some of the problems with PowerPoint-type lectures

Common failings

The shortcomings of the PowerPoint-type presentation are obvious to many, and include the following styles:

  • Reading from the slides. This is often a trap that researchers fall into (especially when they are not confident in their work) and, surprisingly, in job interviews, where candidates reel off some details of the brief they have been asked to present (and it is often a reason that someone doesn’t actually get the role).
  • Creating too much animation. The flow of the presentation is important, and it doesn’t take too much to knock someone off their stride. Many presenters regret adding animation, especially when revealing a list of options, if the reveal goes too slowly or too quickly.
  • Too detailed. A common failing is to add too much text, especially when it is too small. This is often compounded by the first problem, where the presenter reads from the small text on a slide. If you present like this at a job interview – you will not get the job!
  • No flow and breaks. The audience needs to know where they are in a presentation, what the main objectives of the slides are, and how they fit together. There is nothing worse than banging through without any breaks, and not recapping or re-focusing. A common technique to improve this is to use keyframes, which allow the presentation to break and define a new subject area/focus.
  • Clipart overload. Clipart has improved over the years, but some of the clipart from the past often does little to enhance a presentation.
  • Too many changes between slides. With this the presenter continually bombards the audience with lots of slide changes. A good approach is to try to abstract the content into single diagrams, which can be talked around.

Often the best teachers draw things and avoid text, apart from annotating diagrams, so online lectures which tell a story and which abstract the concepts are more engaging than text-based ones with bullet points.

History of PowerPoint

While many think that PowerPoint has its roots in Microsoft Windows, it was originally developed by Forethought, in 1987, for the Apple Mac, and named “Presenter”. It was then renamed “PowerPoint” to overcome trademark restrictions, after which Microsoft bought the company for $14 million, and officially launched the Windows version on 22 May 1990. A key update occurred with PowerPoint 97, with the integration of Visual Basic for Applications (VBA), which supported the scripting of events. Until recently it held over 95% of the presentation software market share, and was installed on over 1 billion computers. Its focus has always been on static slides, with guides for the presenters defined with bullets. While PowerPoint has led us down certain ways of doing presentations, there’s a need for new methods to fully engage audiences (who may be live or watching remotely).

Towards a dynamic presentation

Many of the best teachers use a method of drawing abstract diagrams, and this is used as a focal point for students. In the days of blackboards, teachers would continually draw things on the board (and many even wrote notes on the blackboard for pupils to copy down). This style has not really translated to PowerPoint, which tends to have a static mode of transmission. While it is possible to draw on a slide, it still looks static, and often fails to engage the watcher.

The methods used by Sparkol VideoScribe possibly show one way towards providing an engaging presentation, where a voice-over is added onto a scripted presentation. Figure 2 shows an example where a difficult subject (Cryptography) has been presented in a way that makes it more engaging. Within this, just as a teacher would when drawing on a whiteboard, there is continual movement and drawing around the key concepts. You will also see that there are only five main screens for the key concepts, and the story is then told around these. The key elements of this type of method are to:

  • Provide focus around the key subject topics.
  • Provide a focal point around the key topics.
  • Highlight where the viewer should be looking.

Figure 2: Example Video Scribe presentations

Going digital

There are many ways that academics have been going digital with their lectures. While some have gone purely for audio, others have been able to record their lectures with a “talking-head” presentation. With a “talking-head” method, there is normally a fairly significant investment in recording the lecture, which can have audio problems, and where the presentation does not quite capture the excitement of the lecture situation. These lectures are often recorded live with students, which makes it difficult for the lecturer to act normally, and they will often add in things that are not quite required, or that would be irrelevant for future years. They can thus date quickly.

The voice-over-a-presentation approach is another method of putting lectures on-line. It often has a defined script, and key frames are used to pause and restart the presentation. This allows academics to splice together a presentation, re-record parts of it, and splice them back into the presentation.

Conclusions

A great debate has occurred around access to on-line material, where some universities hold the lecture material within their own infrastructure, but there are great opportunities in terms of engaging with a wide range of stakeholders, especially the general public, in putting lectures on-line, such as through YouTube. This also has the advantage of 24×7 availability of the on-line material, and allows it to be accessed from any location on a range of devices. With an increase in the amount of content available to students, universities must look at new ways to engage, and to showcase the quality of their academic material, and VideoScribe-type lectures are just one technique that they could turn to.

Analysis of Trends in Scottish Independence Vote – Big Data Analysis of Bookmarker Odds

Introduction

There has been a great deal of debate about the usage of opinion polls, especially as they can only take a narrow viewpoint and then scale this onto the sentiments of the majority. While this might work well in general elections, the polls around Scottish Independence might not scale in the same way as a general election.

One sector of the economy that will often win in the end, and predict things better than most, is the betting industry. So, with one of the most important events in the history of Scotland, let’s look to the bookies to give some insights on what is actually happening on the ground.

The data used in this analysis looks back at the daily odds for a Yes vote from 23 bookmakers over the last five months (1 April – 2 September 2014). It is an original analysis, and while not exactly big data, it provides pointers towards the gathering of many data points, which aggregate back into the calculation of the odds. Bookmakers generally monitor a wide range of communication channels, along with polls, and should give an up-to-date analysis of the vote.

Today’s analysis (3 September 2014)

Today’s average for Yes Vote (3 September 2014): 4.06 (down from 4.32), with 14 bookmakers’ odds shortening and 0 lengthening.

Today’s average for No Vote (3 September 2014): 1.22 (up from 1.21)

Lowest odds for Yes Vote: 3.8 [14/5] (PaddyPower) / 3.75 [11/4] (SpreadEx).

Highest odds for No Vote: 1.25 [1/4] (SkyBET/PaddyPower).
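These daily movement counts are simple to reproduce. As a rough sketch (assuming the collected odds are stored in a hypothetical yes_odds.csv file with date, bookmaker and yes_odds columns – this layout is an assumption, not the actual source data), the average and the shortening/lengthening counts for the latest day could be computed as follows:

    import pandas as pd

    # Assumed layout: one row per bookmaker per day, e.g.
    #   date,bookmaker,yes_odds
    #   2014-09-03,bet365,4.1
    odds = pd.read_csv("yes_odds.csv", parse_dates=["date"])

    previous, latest = sorted(odds["date"].unique())[-2:]
    today = odds[odds["date"] == latest].set_index("bookmaker")["yes_odds"]
    yesterday = odds[odds["date"] == previous].set_index("bookmaker")["yes_odds"]

    delta = (today - yesterday).dropna()
    print(f"Average Yes odds: {today.mean():.2f} (previously {yesterday.mean():.2f})")
    print(f"Shortening: {(delta < 0).sum()}, lengthening: {(delta > 0).sum()}")

A negative change in decimal odds is counted as shortening, and a positive one as lengthening.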

Outline of odds

In the independence poll, there are only two horses in the race, so there is either a Yes or a No bet. The way that odds are normally defined is with a fraction which defines the return, so Evens is 1/1, where for every £1 bet you will get £1 back in addition to your stake (so you get £2 in total). If the odds are 2/1 (2-to-1 against), you get £2 back plus your stake (so you will get £3 on a win). For 1/2 (or 2-to-1 on), you win half your stake, so you will get £1.50 back on a winning £1 bet. These types of odds are known as fractional odds, where the value defines the fraction for your payback. The fractional multiplier, though, does not show your stake coming back to you, so decimal odds are used to represent this; they define a value which is multiplied by the stake to give the total returned (basically just the fractional odds plus 1, represented as a decimal value).

The fractional odds value of Evens gives a decimal odds value of 2 (where you get £2 back for a £1 stake), and 2/1 (2-to-1 against) gives 3.0, while 1/2 (2-to-1 on) is 1.5. In terms of roulette, Evens would define the odds for a bet of Red against Black (as each is equally probable). In roulette, though, the odds are slightly biased against the player for a Red v Black bet, as the 0 changes the odds in favour of the casino. For betting overall, bookmakers try to analyse the correct odds so that they have attractive ones (if they want to take the bets) against the others. If they take too much of a risk, they will lose, so their odds around the independence vote should be fairly representative of the demand around bets, and the current sentiment around the debate.
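For readers who want to play with these conversions, a minimal sketch of the fractional-to-decimal conversion, along with the implied probability that the decimal odds represent (ignoring the bookmaker’s margin), is:

    from fractions import Fraction

    def decimal_odds(fractional: str) -> float:
        """Convert fractional odds such as '2/1' or '1/2' to decimal odds."""
        return float(Fraction(fractional)) + 1.0

    def implied_probability(decimal: float) -> float:
        """Implied probability of the outcome from decimal odds."""
        return 1.0 / decimal

    for f in ["1/1", "2/1", "1/2"]:
        d = decimal_odds(f)
        print(f"{f}: decimal odds {d:.2f}, implied probability {implied_probability(d):.2%}")

Evens (1/1) gives 2.0 and a 50% implied probability, 2/1 gives 3.0 (about 33%), and 1/2 gives 1.5 (about 67%), matching the figures above.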

Variation of the last five months

So first let’s look at the odds on a Yes vote from the start of April 2014 (Figure 1) until 1 September 2014. One thing we noticed is that, generally, the odds of the major bookmakers were fairly consistent with each other, but a new trend is occurring where there is a larger difference between them, with the lowest (at 1 September 2014) set at 4 for CORAL and Betfred, up to 5.46 for Betfair. There is also a general downward trend, which possibly reflects the recent opinion polls.

Figure 1: Odds on a Yes vote (using decimal odds)

If we now look at the average odds over the period, we can actually see quite a variation between the bookmakers, possibly related to their risk exposure for their bets. In Table 1 we can see that Betfair and Titanbet have offered the highest average odds, where others have played it a little safer, such as 32Red and CORAL (a short sketch of this calculation is given after the table). Those serious about putting a bet on the vote will obviously look for the best odds, and bookmakers will thus try to beat the rest, without over-exposing themselves.

Table 1: Averages for odds on Yes Vote and Standard Deviation

Bookmaker      Average   Std Dev
bet365         4.5       1.01
SkyBET         4.8       0.88
totesport
Boylesport     4.7       0.82
BETFRED        4.9       1.01
Sportingbet    4.1       0.74
BetVictor      4.4       0.74
PaddyPower     4.9       0.88
StanJames      4.3       1.01
888.com        4.3       0.72
Ladbrokes      4.4       0.74
CORAL          4.1       0.7
WillHill       4.6       0.98
Winner         4.2       0.8
SpreadEx       5         0.77
Betfair        4.7       0.7
betway         4.4       0.8
Titanbet       5         0.58
Bwin           4.8       0.87
Unibet         4.4       0.67
32Red          3.4       0.69
betfair        5.2       1.23
BETDAQ         4.5       1.09
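The averages and standard deviations in Table 1 are straightforward to recompute from the raw daily odds. A minimal sketch, using the same assumed yes_odds.csv layout as in the earlier sketch, is:

    import pandas as pd

    odds = pd.read_csv("yes_odds.csv", parse_dates=["date"])

    # Mean and standard deviation of the daily Yes odds for each bookmaker.
    summary = (odds.groupby("bookmaker")["yes_odds"]
                   .agg(average="mean", std_dev="std")
                   .round(2)
                   .sort_values("average", ascending=False))
    print(summary)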

Changes of odds

Odds will vary over time, but the variation of the odds depends on how reactive they are to changes. Figure 2 shows the variation of changes over the past four months (1 April 2014 – 31 August 2014). We can see that some bookmakers, such as SpreadEx and 32Red, have seen large-scale changes, often changing over 10 times in a single day, where many others have changes every few days. Titanbet, for example, has changed its odds only four times over the last five months (an average of less than once a month). The large number of changes for SpreadEx can be explained by them being a spread betting company, where punters can buy and sell their bets from others, and will thus see fluctuations like a stock market. With spread betting around the Independence vote, punters are looking for any pointers to move their bets, either purchasing when odds are low and selling when they are high – in the same way that stock market traders buy stock when it is low and sell when it is high.
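Counting how often each bookmaker moves its price is a similarly small calculation. A sketch, again assuming the yes_odds.csv layout used earlier, is:

    import pandas as pd

    odds = pd.read_csv("yes_odds.csv", parse_dates=["date"])

    def count_changes(series: pd.Series) -> int:
        """Number of times the quoted odds differ from the previous day's quote."""
        return int((series.diff().dropna() != 0).sum())

    changes = (odds.sort_values("date")
                   .groupby("bookmaker")["yes_odds"]
                   .apply(count_changes)
                   .sort_values(ascending=False))
    print(changes)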

Figure 2: Number of changes in odds over the last four months

The Trend

The trend towards a narrowing of the vote is also reflected in the Yes vote odds over the past 20 days (Figure 3), where we can see there has been a drop of 1.9 from 24 August to 31 August. The current average is 4.37, and if this trend continues for the next two weeks, the Yes Vote will sit at 2.47, which is approaching Evens.

Figure 3: Yes odds for the past 30 days
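The two-week projection mentioned above is just a straight-line extrapolation of the recent daily averages. A sketch of that kind of calculation is shown below; the daily values here are illustrative rather than the actual series:

    import numpy as np

    # Illustrative daily average Yes odds for an eight-day window (not the actual data).
    days = np.arange(8)
    avg_yes = np.array([6.3, 6.2, 6.0, 5.9, 5.7, 5.6, 5.4, 5.3])

    slope, intercept = np.polyfit(days, avg_yes, 1)   # least-squares straight line
    forecast_day = days[-1] + 14                      # two weeks beyond the window
    projection = max(slope * forecast_day + intercept, 1.01)  # decimal odds cannot fall below 1
    print(f"Daily change: {slope:+.2f}, projected average in two weeks: {projection:.2f}")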

If we look at odds for a Yes Vote for August 2014 (Figure 4), we can see that the odds were generally drifting out in the first part of August (from around 5 to over 6), and have started to come back in during the second half of the month (from over 6 to a bunching between 4 and 5.5). Generally the turning point seemed to have happened around the time of the second debate (22 August 2014). One thing that is interesting in Figure 4 is that 888.com fell to 4, while most others were above 5.5. Remember that if the odds drift in for a No Vote, they will drift out for a Yes vote, so perhaps there was a reassessment of the No vote odds, or there was a major bet on No (which would lengthen the odds for a Yes). Over the few days after this, the odds from 888.com for a Yes Vote jumped back to over 5.5.

Figure 4: Yes Vote odds for 23 bookmakers over August 2014

The key changes

A key change took place around the 5 August debate: the odds for a Yes vote had been dropping before the debate, but after it, the average odds for a Yes Vote moved steeply up (dropping one point in the days before the debate, and then rising 1.2 over the four days after it):

9 Aug 5.75
8 Aug 5.43
7 Aug 5.05
6 Aug 5.05
5 Aug 4.54
4 Aug 4.39
3 Aug 5.5

On the day before the second debate there was a peak in the Yes Vote odds of 6.3 (22 August 2014), and this has since fallen to nearly 4 (31 August 2014). The largest changes in the odds have thus occurred around the debate points, with 5 August 2014 having 31 changes (where generally the Yes vote odds drifted out) and 9 changes on 23 August 2014. Also, typically, there is an increasing rate of change of the odds as we move closer to the vote.

Where are we now?

The current average betting odds, on 2 September 2014, are 1.22 for a No vote and 4.06 for a Yes vote, which are equivalent to 1/5 for No (5-to-1 on) and about 3/1 (3-to-1 against) for Yes. The No vote, at one point on 11 August, was 1.1, which is 1/10 (Betfred and SpreadEx), and which is the kind of betting that you would get for Glasgow Celtic v Linlithgow Rose. This has now drifted out to 1/5, with a current trend to continue to drift out.
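Mapping the decimal averages back to traditional prices, as done above (1.22 to 1/5, 4.06 to roughly 3/1), can be sketched by snapping the decimal value to the nearest entry on an assumed ladder of common fractional prices:

    from fractions import Fraction

    # A shortlist of commonly quoted fractional prices (an assumption, not a full price ladder).
    STANDARD_PRICES = [Fraction(n, d) for n, d in
                       [(1, 10), (1, 5), (1, 4), (1, 2), (1, 1),
                        (2, 1), (3, 1), (7, 2), (4, 1), (9, 2), (5, 1), (11, 2), (6, 1)]]

    def nearest_price(decimal_odds: float) -> Fraction:
        """Snap decimal odds to the closest traditional fractional price."""
        target = decimal_odds - 1
        return min(STANDARD_PRICES, key=lambda p: abs(float(p) - target))

    for d in (1.22, 4.06):
        print(f"{d} is roughly {nearest_price(d)}")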

At 1/10, the vote seemed certain for a No, and the bookmakers wanted to limit the number of punters putting money on a one-horse race. Now the bookmakers are not so sure, and can see benefits in lengthening their No vote odds and taking on bets, where punters can gain a fifth of their stake back for betting No, whereas at 1/10, getting only one-tenth of the stake back seemed like too little a return for the risk.

The Yes vote drifted to 6.5 (11/2), after the first debate, but is now back between 4 (3/1) and 4.5 (7/2).

So, on 31 August 2014, we are at 5-1 on for No and around 3-1 against for Yes. In terms of Scottish football, for bookmaker odds, this is now Celtic v Ross County at Celtic Park, with the teams playing until there is a result.

Conclusions

This analysis has looked at bookmaker odds for the last five months, and identified some of the trends. An important one is that the odds for a Yes Vote drifted out to 7 (6/1), and have now come back in and, at 1 September 2014, sit at the lower odds of 4 (3/1), and seem to be reducing by the day.

As we get closer to the end, there are many analysts focusing on understanding the way that things are moving, and none are more focused than punters and bookmakers, so keep watching the betting market, as they are more likely to get it right than anyone else. The next few days are likely to show how the betting market will go, especially as bookmakers are feeling out how the trends are going, and punters are deciding when best to place their bets.

The current trend is downwards for the odds on a Yes Vote, and a continuation of the current trend would be approaching Evens by the time of the vote. It could be that the downward trend for the odds will stop, but bookmakers certainly do not want to take risks in defining their odds. The next few days should show if this trend continues.

Key observations:

  • Odds have generally drifted out for the Yes vote, but are coming back in – now sitting, at a minimum, at 4 (or 3/1) for Yes. There is a trend that puts the betting at Evens at the time of the vote, although there may be a bottoming-out of the trend in the next few days.
  • Some bookmakers, especially the spread betting ones, have rapid changes in odds, while others are static (with one bookmaker changing its odds about once a month).
  • The largest number of changes in a single day, over the past five months, was 31 and occurred on 5 August 2014, which was the date of the debate between Alex Salmond and Alistair Darling.

At 1/10 (1.1), there was very little incentive for punters to put money on a No vote, and bookmakers were identifying it as a one-horse race, but at 1/5, there’s much more of a bet to be placed, and the bookmakers are highlighting that it’s now back as a two-horse race.

A key breakpoint is whether the Yes vote odds can break the 4 (3/1) barrier (with No at 1/5), or whether they will settle at this figure. As of today (1 September 2014), the average for a Yes Vote is 4.32 (down from 4.37), with 9 bookmakers’ odds shortening and one lengthening.

Keep watching this page, as it will be updated with the daily trends ..

Note: This is a non-political analysis, and is purely focused on analysing open-source data related to bookmaker odds. It is inspired by the usage of big data analysis, and how this identifies trends.

DDoS, Botnets, Phishing and RATs – the Cyber weapons and army of choice

Introduction

The Internet was conceived as a distributed network where there are multiple routes that data packets can take to get to their destination. It was also created without the controls of any organisation or government, and has thus been difficult to regulate. The strength, of course, is that there is access to content from around the World, without governments controlling that access. Within any political agenda there are those, especially at the extremes of the political divide, who want to limit access to content which they see as dangerous. Governments have generally controlled access to information by monitoring their physical borders, in order to limit access to content which could do damage to the nation state. The openness of the Internet, though, can also expose organisations to large-scale cyber threats.

Cyber Attacks on the Finance Industry

The problem around the openness of the Internet was highlighted last week by the US authorities, who identified that there had been a wave of cyber attacks on American financial institutions, including JPMorgan Chase, with the intention of either stealing data or disrupting their operation. Previous attacks focused on Goldman Sachs, Morgan Stanley, Bank of America, Citigroup and Wells Fargo.

As the finance industry becomes more dependent on its information infrastructure, the risks to these organisations, and to the world economy, also increase. In times gone past, the finance industry used dedicated leased lines for their communications, but these are expensive, and many organisations have moved to using the Internet for their communications, and even to the public cloud infrastructure to store and process their transactions. Often, though, they use encrypted channels to transmit data over public networks, but it is their connections to the Internet that can provide a hook for attacks.

The US Treasury, like the Bank of England, has identified that cyber threats are a key focus, and that organisations need to work together to defend against a range of threats, including from foreign governments – one theory relates the attacks to retaliation by the Russian government against US sanctions over the crisis in Ukraine. Other possible motivations focus on cyber criminals and hacktivists.

External Exposure

Whenever an organisation connects to the Internet, it is automatically exposed to an external threat. This could just be a little touch-point, but it gives a point of attack against the organisation. These touch-points are addressable through a public IP address, and every system that is reachable on-line requires a public IP address, so although organisations can hide away much of their infrastructure, they must allow some network traffic to come through, and the challenge remains as to how to allow network traffic out, and only allow the valid data back in.

With the Internet, we now have a major infrastructure of zombie agents, which can be taken control of and used to lead an attack against any organisation or defence infrastructure of the zombie master’s choice. If we add in the possibility of using The Onion Router (Tor) network, there are many opportunities for cyber warfare by proxy.

Attacking the infrastructure of an organisation with DDoS is only one method that can be used to disrupt its operations. Other recent attacks have focused on external systems, such as the domain name registrar, the Domain Name System (DNS), or any other part of the critical infrastructure.

DDoS and Botnets

This year (2014) has actually seen more DDoS attacks than ever before, with a doubling of the high-end attacks over the year, and with over 100 attacks peaking at more than 100Gbps. The largest attack so far was against a Spanish site, where NTP (Network Time Protocol) was used to bombard the Web infrastructure. With this, the intruder makes requests from compromised hosts to an NTP server for the current time, but uses the target as the return address for the request. Overall, the protocols used on the Internet were not designed with security in mind, so it is possible to use a different return address from the one that actually made the request. This specific attack peaked at 154.69Gbps, which is more than enough to bring any network down. The key aim is to exhaust networked resources, such as the interconnected devices, the bandwidth for the connections to the Internet, and the CPU of the servers.
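The power of this kind of reflection attack comes from the amplification factor: a small spoofed request triggers a much larger reply aimed at the victim. The arithmetic can be sketched as follows (the packet sizes are illustrative assumptions, not measurements from the attack described above):

    # Back-of-the-envelope arithmetic for a reflection/amplification attack.
    # The sizes below are illustrative assumptions, not measured values.
    request_bytes = 234        # one spoofed NTP request with a forged return address
    response_bytes = 48_000    # the reply can span many packets
    amplification = response_bytes / request_bytes

    attacker_uplink_gbps = 1.0                        # bandwidth the attacker actually controls
    reflected_gbps = attacker_uplink_gbps * amplification
    print(f"Amplification factor: {amplification:.0f}x")
    print(f"Traffic aimed at the victim: {reflected_gbps:.0f} Gbps")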

The reason that DDoS is often successful is three-fold:

  • Difficult to differentiate between good and bad traffic. Overall the Internet has been created from some extremely simple protocols, which were not designed with security in mind. Thus it is extremely difficult to differentiate good traffic from bad traffic. Normally organisations throttle back when they are under attack, by not accepting new connections and waiting until existing connections have been closed (a minimal sketch of this throttling behaviour is given after this list).
  • Tracks are obfuscated. With a reflection attack, the requests are bounced off an intermediate device, making it difficult to trace the actual source of the attack. With networks such as Tor, the intruder can further hide their tracks.
  • Zombie nodes used in the attack. There are many compromised hosts on the Internet, including those compromised by the Zeus botnet. Each of these can be controlled and used to attack the target.
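As a rough illustration of the throttling behaviour mentioned in the first bullet, the sketch below refuses new connections once an assumed arrival-rate or active-connection threshold is exceeded; real mitigations are, of course, far more sophisticated:

    import time
    from collections import deque

    class ConnectionThrottle:
        """Crude sketch of 'throttling back': refuse new connections when the
        recent arrival rate looks like an attack, or when too many are active,
        and wait for existing connections to drain."""

        def __init__(self, max_active: int = 1000, max_per_second: int = 200):
            self.max_active = max_active
            self.max_per_second = max_per_second
            self.active = 0
            self.recent = deque()        # timestamps of recent connection attempts

        def allow_new_connection(self) -> bool:
            now = time.time()
            self.recent.append(now)
            while self.recent and now - self.recent[0] > 1.0:
                self.recent.popleft()    # keep a one-second sliding window of attempts
            if len(self.recent) > self.max_per_second or self.active >= self.max_active:
                return False             # throttle back: refuse the new connection
            self.active += 1
            return True

        def connection_closed(self) -> None:
            self.active = max(0, self.active - 1)

    throttle = ConnectionThrottle(max_active=3, max_per_second=100)
    print([throttle.allow_new_connection() for _ in range(5)])   # later attempts refused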

DDoS is now often used as a method of protest. For example, in protest against the St. Louis County Police’s involvement in the killing of unarmed teenager Michael Brown in Ferguson, Missouri, there was a DDoS (Distributed Denial of Service) attack on the police Web site, which brought the site down for several days. Overall it made a strong statement, and one which the authorities could do little about. Along with this, the group responsible, who declared links to Anonymous, outlined that they had hacked into the St. Louis County Police network and gained access to dispatch tapes related to the day of the shooting, which they then uploaded to YouTube.

Domain Name Registrar compromise

On 22 June 2014, the SEA (Syrian Electronic Army) showcased how it was possible to compromise a key element of the trusted infrastructure (by changing the IP address mapping for the Reuters domain), and used this to display a page which stated:

Stop publishing fake reports and false articles about Syria! UK government is supporting the terrorists in Syria to destroy it. Stop spreading its propaganda.

The target, though, was not the Reuters site itself, but the content it hosted, which is used by many other media outlets. This has happened in other related hacks on sites, such as with the New York Times, where the SEA went after the domain name servers of the New York Times and Twitter, through the registry records of Melbourne IT. Thus when a user wanted to go to the New York Times site, they were re-directed to a page generated by the SEA.

In this case the web advertising site Taboola was compromised, which could have serious consequences for its other clients, who include Yahoo!, the BBC and Fox News. With the increasing use of advertising material on sites, it will be a great worry to many sites that messages from hacktivists could be posted through them. Previously, in 2012, Reuters was hacked by the SEA, who posted a false article on the death of Saudi Arabia’s foreign minister Saud al-Faisal.

In a previous hack on The Onion, the SEA used one of the most common methods of compromise: a phishing email. With this, a person in the company clicked on a malicious link for what seemed to be a lead story from the Washington Post. Unfortunately it re-directed to another site, which then asked for Google Apps credentials. After this, the SEA gained access to the Web infrastructure and managed to post a story.

It is possible that this attack on Reuters was based on this type of compromise, as it is fairly easy to target key users, and then trick them into entering their details. Often the phishing email can even replicate the local login to an intranet, but it is actually a spoofed version. In the case of The Onion, the SEA even gained access to their Twitter account.

In classic form, The Onion, on finding the compromise, posted an article leading with:

“Syrian Electronic Army Has A Little Fun Before Inevitable Upcoming Death At Hands of Rebels.”

While it took a while for The Onion to understand what had happened on their network, Reuters detected the compromise, and within 20 minutes the content had been fixed.

RATs

External threats typically involve attacking the information infrastructure, and can be seen from network traffic coming into the network. At present there is a whole host of security products and devices which aim to protect the infrastructure against these attacks, but the preferred option for an intruder is to get over the external security defences and set up a hook within the network – and become an insider threat.

Sometimes the threats are thus both internal and external, such as where the Syrian Electronic Army (SEA) has been focusing on communications websites, such as Forbes and, possibly, CENTCOM, whereas the Syrian Malware Team (STM) has been using a .NET-based RAT (Remote Access Trojan) called BlackWorm to provide a method of gaining a hook into the organisation. Once an intruder is within an organisation, the firewall can have little effect on their operations. The STM team seems to be pro-Syrian government, using banners featuring Syrian President Bashar al-Assad.

Conclusions

Like it or not, we are moving to the point where we are becoming increasingly dependent on the Internet, and it has not been constructed in a way which supports the defence mechanisms that national borders used to provide us. The threats to our organisations and critical infrastructure increase by the day, and the tools available to adversaries are in the hands of anyone who wants them. At one time an attack on a nation state required considerable investment, in order to build an army and acquire weapons. But now there is a zombie army on the Internet ready to be taken over, where the tools are available as source code, which can be easily changed and moved to places that make it difficult for law enforcement professionals to get access to them.

 

Rise of Hacktivism: Attacks against Virtual Infrastructure are Increasingly a Tool-of-Choice for Protesters

Introduction

In an era of always-on connectivity, protesters can make a strong statement against an organisation by bringing down its information infrastructure. It is something that can make front-page news stories, and becomes the equivalent of protesting from afar, with very little chance of being traced.

So, as a protest against the St. Louis County Police’s involvement in the killing of unarmed teenager Michael Brown in Ferguson, Missouri, there was a DDoS (Distributed Denial of Service) attack on the police Web site, which brought the site down for several days. Overall it made a strong statement, and one which the authorities could do little about. Along with this, the group responsible, who declared links to Anonymous, outlined that they had hacked into the St. Louis County Police network and gained access to dispatch tapes related to the day of the shooting, which they then uploaded to YouTube.

Why is DDoS so successful?

This year (2014) has actually seen more DDoS attacks than ever before, with a doubling of the high-end attacks over the year, and with over 100 attacks peaking at more than 100Gbps. The largest attack so far was against a Spanish site, where NTP (Network Time Protocol) was used to bombard the Web infrastructure. With this, the intruder makes requests from compromised hosts to an NTP server for the current time, but uses the target as the return address for the request. Overall, the protocols used on the Internet were not designed with security in mind, so it is possible to use a different return address from the one that actually made the request. This specific attack peaked at 154.69Gbps, which is more than enough to bring any network down. The key aim is to exhaust networked resources, such as the interconnected devices, the bandwidth for the connections to the Internet, and the CPU of the servers.

The reason that DDoS is often successful is three-fold:

  • Difficult to differentiate between good and bad traffic. Overall the Internet has been created from some extremely simple protocols, which were not designed with security in mind. Thus it is extremely difficult to differentiate good traffic from bad traffic. Normally organisations throttle back when they are under attack, by not accepting new connections and waiting until existing connections have been closed.
  • Tracks are obfuscated. With a reflection attack, the requests are bounced off an intermediate device, making it difficult to trace the actual source of the attack. With networks such as Tor, the intruder can further hide their tracks.
  • Zombie nodes used in the attack. There are many compromised hosts on the Internet, including those compromised by the Zeus botnet. Each of these can be controlled and used to attack the target.

The Rise of Hacktivism

As we have seen in Russia’s suspected cyber attack on Web sites in Estonia, and in the Arab Spring uprisings, the Internet is playing an increasing part within conflicts around the World. Thus, as we move into an Information Age, the battlefield of the future is likely to be in cyberspace, and it will also be the place where nation states will struggle to control news outlets.

A cause or a fight?

Organisations need to understand that there are new risks within the Information Age, and there are new ways to distribute messages, especially from those who are skilful enough to be able to disrupt traditional forms of dissemination. Thus Hacktivism can become a threat to any nation state or organisation (Figure 1).

Figure 1: Security is not just technical, it is also Political, Economic, and Social

The important thing to note about Hacktivism is that the viewpoint on the Hacktivist will often reflect the political landscape of the time, and that time itself can change this viewpoint. While Adolf Hitler and Benito Mussolini are still rightly seen as agents of terror, Martin Luther King and Mahatma Gandhi are now seen as freedom fighters. Thus viewpoints often change, and for some the Hacktivist can have the image of a freedom fighter.

Figure 2: Hacktivism

Big v Little

The Internet supports a voice for all, and there are many cases of organisations and nation states upsetting groups around the World, and of those groups successfully rebelling against them. In 2012, Tunisian Government web sites were attacked because of WikiLeaks censorship, and in 2011, the Sony PlayStation Network was hacked after Sony said they would name and shame the person responsible for jailbreaking their consoles (Figure 3). It can be seen that just because you are small on the Internet doesn’t mean you cannot have a massive impact. Sony ended up losing billions on their share price, and lost a great deal of customer confidence.

Figure 3: Hacktivism examples

HBGary Federal

The HBGary Federal example is one of the best in terms of how organisations need to understand their threat landscape. For this, Aaron Barr, the CEO of HBGary Federal, announced that they would unmask some of the key people involved in Anonymous, and contacted a host of agencies, including the NSA and Interpol. Anonymous bounced a message back saying that they shouldn’t do this, as they would go after them. As HBGary were a leading security organisation, they thought they could cope with this and went ahead with their threat.

Anonymous then searched around on the HBGary CMS system, and found that a simple PHP request of:

http://www.hbgaryfederal.com/pages.php?pageNav=2&page=27

gave them access to the complete database of usernames and hashed passwords for their site. As the passwords were not salted, it was an easy task to reverse engineer the hashes back to the original passwords. Their real targets, though, were Aaron Barr and Ted Vera (COO), each of whom used weak passwords of six characters and two numbers, which are easily broken.
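The difference that salting makes can be illustrated with a short sketch (the password and hash choices here are hypothetical, used only to contrast an unsalted fast hash with a salted, slow one):

    import hashlib
    import os

    # Hypothetical weak password (six letters and two numbers, as in the case above).
    password = "yankee12"

    # Unsalted fast hash: identical passwords always give the same digest,
    # so a precomputed dictionary or rainbow table reverses it almost instantly.
    unsalted = hashlib.md5(password.encode()).hexdigest()

    # Salted, slow hash: a random per-user salt and many iterations make
    # precomputed tables useless and brute force far more expensive.
    salt = os.urandom(16)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

    print("Unsalted MD5:  ", unsalted)
    print("Salted PBKDF2: ", salted.hex())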

Now that they had these login details, Anonymous moved on to other targets. Surely they wouldn’t have used the same password for their other accounts? But when Anonymous tried, they could get access to a whole range of other accounts using the same password (including Twitter and Gmail). This allowed Anonymous access to GBs of R&D information. Then they noticed that the system administrator for the company’s Gmail account was Aaron, and they managed to gain access to the complete email system, which included the email system for the Dutch Police.

Figure 4: Access to email and a whole lot more.

Finally they went after the top security expert: Greg Hoglund, who owned HBGary. For this they sent him an email, from within the Gmail account, posing as a system administrator and asking for confirmation of a key system password, which Greg duly replied with. Anonymous then went on to compromise his accounts – a lesson for many organisations. While HBGary Federal has since been closed down, due to the adverse publicity around the hack, the partner company (HBGary) has gone from strength to strength, with Greg making visionary presentations on computer security around the World.

Figure 5: Greg’s compromise.

Conclusions

A key factor in these types of attacks is that, when an organisation is not prepared, the complete infrastructure can fall like a house of cards. In Ferguson, the email system also went off-line for a while, and, to protect themselves from data leakage, they took down all personal information from their site.

The protection of IT infrastructures against DDoS is extremely difficult, and organisations need to understand how they will cope with these types of attacks. Along with this, many organisations are even more proactive, and actively listen to the “buzz” around hacking events on the Internet, in order to put in place mitigation methods. Often it is a matter of riding out the attack, and enabling new network routes and virtualised devices to cope with it while it happens.

Overall it is a difficult debate, and one person’s cause is another’s fight, but the technological challenge remains; it is one of the most difficult faced by IT architects, and is often costly to deal with.

Hacking Traffic Lights and the Internet of Things – We Should All Beware of Bad Security!

Introduction

As we move into an Information Age, we are becoming increasingly dependent on data for the control of our infrastructures, which leaves them open to attackers. Often critical infrastructure is obvious, such as the energy supplies for data centres, but it is often the least obvious elements that are the most open to attack. This could be the air conditioning system in a data centre, where a failure can cause the equipment to virtually melt (especially tape drives), or the control of traffic around a city. As we move towards using data to control and optimise our lives, we become more dependent on it. Normally in safety-critical systems there is a failsafe control mechanism: an out-of-band control system which makes sure that the system does not operate outside its safe working limits. In an industrial plant, this might be a vibration sensor on a pump where, if the pump is run too fast, the vibration will be detected, and the control system will place the overall system into a safe mode. For traffic lights there is normally a vision capture of the state of the lights, and this is fed back to a failsafe system that is able to detect when the lights are incorrect. If someone gets access to the failsafe system, they can thus overrule safety, and compromise the system. This article outlines a case where this occurred, and some of the lessons that can be learnt from it.

Traffic Light Hacking

So, to prove a point, security researchers led by Alex Halderman at the University of Michigan managed to use a laptop and an off-the-shelf radio transmitter to control traffic light signals (https://jhalderm.com/pub/papers/traffic-woot14.pdf). Overall they found many security vulnerabilities, and managed to control over 100 traffic signals within a single city in Michigan using one laptop. In order to be ethical in their approach, they gained full permission from the road agency, and made sure that there was no danger to drivers. Their sole motivation was to show that traffic control infrastructure could be easily taken over.

Overall they found a weak implementation of security, with the usage of open and unencrypted radio signals, which allowed intruders to tap into the communications, and the usage of factory-default usernames and passwords. Along with this, there was a debugging port which could be easily compromised.

In the US, the radio frequency used to control traffic lights is typically in the ISM band at 900 MHz or 5.8 GHz, which makes it fairly easy to get equipment that can communicate with the radio system. The researchers used readily available wireless equipment and a single laptop to read the unencrypted data on the wireless network.

Figure 1 provides an overview of the control system, where the radio transmitter provides a live feed (and other sensed information) to the road agency. The induction loop is normally buried at each of the junctions and detects cars as they pass over it, while the camera is used to watch the traffic lights and feed the colours of the lights back to the controller. In this way there is a visual failsafe.

Overriding the failsafe

The MMU (Malfunction Management Unit) is the failsafe operator on the system, and ensures that the lights are not put into an unsafe state (such as Red and Green at the same time); the lights are then adjusted using the information gained from the induction loops in the road (which sense cars as they pass over them). If control can be gained of the MMU, allowing access to the controller, the lights can be compromised to go into incorrect states, or to stay at steady red (and cause gridlock within a city). Within the MMU controller board, the researchers found that by connecting a jumper wire, the output from the controller was ignored, and the intersection was put into a known-safe state.
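Conceptually, the MMU is just a conflict checker that refuses unsafe combinations of signals. A highly simplified sketch of that idea (an assumed software model – real MMUs are hardware conflict monitors programmed with a permitted-phase card) is:

    # Highly simplified model of an MMU-style conflict check (assumed phase table).
    SAFE_PHASES = {
        ("GREEN", "RED"),   # north-south green, east-west red
        ("RED", "GREEN"),
        ("AMBER", "RED"),
        ("RED", "AMBER"),
        ("RED", "RED"),     # all-stop is always safe
    }

    def mmu_check(north_south: str, east_west: str) -> str:
        """Pass a requested state through if it is safe, otherwise force a known-safe fallback."""
        if (north_south, east_west) in SAFE_PHASES:
            return f"{north_south}/{east_west}"
        return "FLASHING-RED/FLASHING-RED"   # known-safe failure state

    print(mmu_check("GREEN", "RED"))     # allowed
    print(mmu_check("GREEN", "GREEN"))   # conflicting request: forced into fail-safe

The point of the jumper-wire finding above is that this check sits between the controller and the lights; if it can be bypassed or its inputs forged, the safety guarantee disappears.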

Figure 1: Overview of traffic control system

Same old debug port

A typical security problem in many control systems is that there is often a debug port, which gives highly privileged access to the system. Within this compromise, the researchers found that the control boxes ran VxWorks 5.5, which leaves a debug port open for testing. They then sniffed the packets between the controller and the MMU, and found that there was no authentication used, and that the messages were not encrypted and could be easily viewed and replayed. This allowed them to reverse engineer the messaging protocol for the lights. They then created a program to activate any of the buttons within the controller and display the results, and then even to access the controller remotely. In the end they managed to turn all the lights in the neighbourhood to red (or all green on a given route – in order to operate safely within the experiment).

DDoS

Finally they found that the units were susceptible to a denial-of-service (DoS) attack, where continual accesses with incorrect control signals over the network caused the malfunction management unit to put the lights into a failure state (all red). In this way the system failed to cope with excessive traffic, and all the units would end up failing with this type of probe.

 

Conclusions

This vulnerability showed all the standard signs of the bad integration of security, which is common in many systems where security is not thought of as a major concern. This is not a small-scale issue, as the researchers identified that this type of system is used in more than 60% of the traffic intersections in the US. If a malicious agent wanted to bring a city, or even a country, to its knees, they could just flip a switch … and there is no road transport system, which can then cause chaos to the rest of the infrastructure. We really need to rethink the way that systems are designed, and probe them for their vulnerabilities.

The researchers in this study already have other easy targets in their sights, such as tapping into the public messaging systems on freeways, and into the infrastructure created by the U.S. Department of Transportation (USDOT) for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) systems, along with the new work related to the Connected Vehicle Safety Pilot program. As we move into a world with intercommunication of signals between cars and the roadway, and between cars themselves, it is important that we understand whether there are security problems, as with the flick of a switch an attacker could cause mass chaos.

We really need to start to train software developers and embedded systems designers to understand the vulnerabilities of their systems, and to recognise that times have changed: it is no longer enough just to test that something works or not!

Goodbye Big Blue and Hello To Big Data

Turning Full Circle (360 degrees)

With the sale of their low-end server business to Lenovo, IBM have completed their journey from a company which led the industry for 70-odd years, then produced the PC and became an also-ran. After a 35-year detour, they have now fully returned to their roots: producing high-performance computers; leading on computer software integration; and focusing on the client and their needs. The full circle is perhaps highlighted by one of the great computers of the past: the IBM System/360, which led the industry for decades. For IBM, the generic selling of computer boxes has never been seen as something that is likely to be a conduit for them to innovate and lead. For the creator of the ATM and the hard disk, IBM have thrown off the shackles of the last 40 years, and are back where they want to be.

The current trends within the computing industry are clear for all to see, as smaller hand-held units become the device of choice for consumers, and computing power is now being bundled into clusters, where resources can be shared across the cluster and provided on an on-demand basis. The desktop and the low-end server become just the end part of a large-scale infrastructure providing an elastic computing provision – which is basically a mainframe, but one which has lots of sub-computers, and which can be easily scaled up and torn down. The world has changed since IBM created the PC, with its 4.77MHz clock and 640kB of memory. Now we have hand-held devices with four processing cores, which run more than 1,000 times faster and have more than 25,000 times the memory of the original PC. In fact, it is not the processing power and memory capacity of a single device that is the key thing these days; it is the ability to add it into a cluster that provides the most interest for many companies. A failure in any part of a computer used to cause many problems, but with a clustered infrastructure the failure is not noticed by the users, as the data and processing power is mirrored across the cluster.

So we have moved on from the days of individual computer systems simply connecting to a network; we now see many computers connected together to create a self-healing infrastructure, where it is fairly easy to add new cluster elements, and where the focus is on building high-performance clusters which are used to process and analyse large data sets. It is this high-end market which IBM see as the future, and many companies, too, see their ability to succeed in the marketplace as resting on their ability to use data analytics to drive forward.

IBM have generally developed a broad range of products which span everything from application software to computer hardware. In the 1960s and 1970s, large computer companies, such as DEC, defined the standards involved in the industry; the IBM PC changed this, providing a platform of generalised hardware which companies could quickly copy, defining new industry standards. The definition of a layered approach to networking also allowed companies to specialise as horizontal integrators, and they were able to move faster and innovate more quickly than the vertical integrators.

Turning a Large Ship Around

Within the computing industry, companies have to spot opportunities, and make sure they move their product provision to take advantage of market changes. It is an industry in which leading companies can go from boom to bust in a short time. There are thus many examples of companies, at their peak, failing to spot changes in the market, including Compaq, Sun Microsystems and DEC, who became fixated on a certain product range, and failed to see the evolution of the market.

Even Apple struggled for a time in the 1990s to find markets for its hardware, and struggled to move the industry away from Microsoft Windows and the IBM PC towards its own operating system and computer hardware. They struggled against the impact of the IBM PC and, almost as a last resort, adopted the same hardware that was used in the IBM PC (as their previous computers used IBM’s PowerPC microprocessors, which had a different way of running software from the Intel microprocessors used in the PC that IBM developed), and integrated a Unix-based operating system. Both of these changes considerably reduced their investment in the unseen parts of a PC, and focused their attention on the parts the user was most interested in: the usability of the computer. In 2009, Apple completed their transformation with Mac OS X Snow Leopard, which only supported Intel-based architectures. Probably, apart from the IBM transformation, this was one of the smartest moves ever seen in the computing industry. For a company such as IBM, who have based their product range on technical innovations, the route taken by Apple was not really one that IBM would ever have felt comfortable with.

Cloud, Cloud and More Cloud

While many companies in the computing industry, especially ones focused on desktop systems, such as Dell, are trying to understand what their existing product range should be and where they need to develop, IBM have provided one of the best examples of how a large corporation can lead within an industry sector, detect where their impact is failing, and go forward and transform themselves with a renewed focus.

For such a large organisation, IBM have managed to do this seamlessly, and have come out as one of the leaders of the pack in Cloud Computing and Big Data. IBM made large-scale computers, took a detour for 40-odd years, and have now gone back to their roots. As one of the first companies to create mainframe computers, and the creator of one of the first programming languages – FORTRAN (created in 1957, and still used in the industry) – they are now back in the place they find most comfortable, supporting business sectors rather than generic computing needs.

Microsoft, Intel and Apple have successfully managed to plot paths through rapid changes in the computing industry, and have kept themselves in business for over 40 years, still innovating and leading the market in certain areas. While Apple and Intel have continued to invest in hardware development, IBM spotted a while back that the requirement for low-end computer hardware, especially for desktops, would offer very little in the way of long-term profitability. So, in 2005, they signalled the first move of getting out of low-end hardware by selling off their PC business, and this is now complete with the sale of the low-end server business, both to Lenovo.

The computing market, which was based around desktop computers from the 1970s until recently, is now focusing on mobile devices, which do not use the architecture developed initially by IBM, and on high-end servers which run Cloud Computing infrastructures. The need for “bare-metal” servers, where one operating system runs on one machine, is reducing fast, as high-end servers are now capable of running many virtual servers and hosts at the same time. IBM has thus identified that it is the high-end market which will provide the future, especially in applying Big Data analysis to their range of services – becoming more service-oriented and developing in more profitable areas. These signs can also be seen within the IT security industry, where the need for security products, such as firewalls, stays fairly static, while the demand for security consultancy services and support rapidly increases.

At one time, one operating system ran on one computer, as the hardware could only cope with this. Once the computing power existed within a single machine to run more than one operating system at a time, and still give acceptable performance, it was the beginning of the end for the low-end server market.

Big Data

The requirement and market for hardware remains fairly static, but Cloud Computing and Big Data processing continue to expand fast, and this highlights the increasing dependence that many market sectors have on the provision of Web services.

The amazing thing for IBM is that they have moved from a company which was built on defining hardware standards and controlling the industry, to one that is built on software and high-performance systems, and one that embraces open standards (especially for open source software). They have thus transformed themselves from a hardware company to a software one, and one that leads the world. IBM is still seen as one of the most innovative companies in the world (including five Nobel Prizes and numerous awards for scientific impact, along with inventing the ATM, magnetic stripe cards, relational databases, floppy disks and hard disks), and one with a strong brand image.

Their renewed focus goes back to their roots of the 1950s, with their lead within mainframe computers, and it is now built around their advanced computing infrastructure. In the 1990s, IBM showcased the increasing power of computers with the defeat of Garry Kasparov by the IBM Deep Blue computer. While the real mastery was just the sheer power of searching through millions of possible moves and finding the best one, they then turned their focus to beating humans in the areas where humans triumphed … understanding the English language. With this, IBM Watson managed to beat human opponents at Jeopardy!, and then managed to have a higher success rate in lung cancer diagnosis than leading cancer specialists. For the cancer diagnosis, Watson was sent back to medical school, and learnt how to spot lung cancer signs by analysing a whole range of unstructured data and using natural language processing.

Conclusions

IBM’s renewed focus on moving their business was highlighted recently when they laid out their vision of the future, and for the first time none of the priorities focused on hardware-based systems – the focus instead being on Cloud Computing and Big Data. These changes in the market space have also been spotted by many companies, with large-scale investment in scaling business applications towards a Cloud infrastructure.

Companies in the past have depended on their computer infrastructure, but increasingly it is their cloud and data infrastructure which is becoming their most important asset. The need for computing power increases by the day, and it is the ability to bring computers together into a general resource which becomes the most important element, where memory, processing power and disk storage can be seen as a single resource pool; at one time computing was built around distributing the computing power, which wasted a great deal of the resource. So, where are we now? We’re building the largest computer ever created – The Cloud. IBM have shown they have the vision to move towards this, and to lead within Big Data, which will define the new architectures of the future – in the same way that the Intel architecture built the computer industry – and which could bring great benefits to every citizen, especially in evolving areas such as health care and education.

The History of IBM

One of the first occurrences of computer technology was in the USA in the 1880s. It was due to the American Constitution demanding that a survey be undertaken every 10 years. As the population in the USA increased, it took an increasing amount of time to produce the statistics. By the 1880s, it looked likely that the 1880 survey would not be complete until 1890. To overcome this, Herman Hollerith (who worked for the Government) devised a machine which accepted punch cards with information on them. These cards allowed a current to pass through them when a hole was present. Hollerith’s electromechanical machine was extremely successful and was used in the 1890 and 1900 Censuses. He even founded the company that would later become International Business Machines (IBM): CTR (the Computing-Tabulating-Recording Company). Unfortunately, Hollerith’s business fell into financial difficulties and was saved by a young salesman at CTR named Tom Watson, who recognized the potential of selling punch-card-based calculating machines to American business. Watson eventually took over the company and, in the 1920s, renamed it International Business Machines Corporation (IBM). After this, electromechanical machines were speeded up and improved. Electromechanical computers would soon lead to electronic computers, using valves.

After the creation of ENIAC, progress in the computer industry was fast: by 1948, small electronic computers were being produced in quantity, and within five years 2,000 were in use; by 1961 the figure was 10,000, and by 1970 it was 100,000. IBM, at the time, had such a considerable share of the computer market that a complaint was filed against them alleging monopolistic practices in their computer business, in violation of the Sherman Act. By January 1954, the US District Court had made a final judgment on the complaint, and a 'consent decree' was signed by IBM, which placed limitations on how IBM conducted business with respect to 'electronic data processing machines'.

In 1954, the IBM 650 was built and was considered the workhorse of the industry at the time (around 1,000 machines were sold, and it used valves). In November 1956, IBM showed how innovative they were by developing the first hard disk, the RAMAC 305. It was towering by today's standards, with 50 two-foot-diameter platters giving a total capacity of 5MB. Around the same time, the Massachusetts Institute of Technology produced the first transistorised computer: the TX-0 (Transistorized Experimental computer). Seeing the potential of the transistor, IBM quickly switched from valves to transistors and, in 1959, produced the first commercial transistorised computer, the IBM 7090/7094 series, which dominated the computer market for years.

In 1960, in New York, IBM went on to develop the first automatic mass-production facility for transistors. In 1963, the Digital Equipment Corporation (DEC) sold its first minicomputer, to Atomic Energy of Canada. DEC would become the main competitor to IBM, but would eventually fail after dismissing the growth of the personal computer market.

The second generation of computers started in 1961 when the great innovator, Fairchild Semiconductor, released the first commercial integrated circuit. In the next two years, significant advances were made in the interfaces to computer systems. The first was by Teletype, who produced the Model 33 keyboard and punched-tape terminal; it was a classic design and was found on many of the available systems. The other advance was by Douglas Engelbart, who received a patent for the mouse pointing device for computers. The production of transistors increased, and each year brought a significant decrease in their size.

The third generation of computers started in 1965 with the use of integrated circuits rather than discrete transistors. IBM was again innovative and created the System/360 mainframe, a true classic of computing history. Then, in 1970, IBM introduced the System/370, which included semiconductor memories. All of these computers were very expensive (approximately $1,000,000) and were the great computing workhorses of the time; they were so expensive to purchase and maintain that most companies had to lease their computer systems, as they could not afford to buy them outright. As IBM happily clung to its mainframe market, several new companies were working away to erode its share. DEC would be the first, with its minicomputer, but it would be the PC companies of the future who would finally overtake IBM. The beginning of this loss of market share can be traced to the development of the microprocessor, and to one company: Intel. In 1967, though, IBM again showed its leadership in the computer industry by developing the first floppy disk. The growing electronics industry was also starting to entice new companies to specialize in key areas, such as International Research, who applied for a patent for a method of constructing double-sided magnetic tape utilizing a Mumetal foil interlayer.

The beginning of the slide for IBM occurred in 1968, when Robert Noyce and Gordon Moore left Fairchild Semiconductor and met up with Andy Grove to found Intel Corporation. To raise the required finance, they went to a venture capitalist named Arthur Rock, who quickly found the start-up funding, as Robert Noyce was well known for being the first person to put more than one transistor on a piece of silicon. At the same time, IBM scientist John Cocke and others completed a prototype scientific computer called the ACS, which used some RISC (Reduced Instruction Set Computer) concepts. Unfortunately, the project was cancelled because it was not compatible with IBM's System/360 computers.

In 1969, Hewlett-Packard branched into the world of digital electronics with the world's first desktop scientific calculator: the HP 9100A. At the time, the electronics industry was producing cheap pocket calculators, and this led to the development of affordable computers when the Japanese company Busicom commissioned Intel to produce a set of between eight and twelve ICs for a calculator. Instead of designing a complete set of ICs, Ted Hoff, at Intel, designed a single integrated circuit that could receive instructions and perform simple functions on data, with its behaviour defined by a program rather than fixed in hardware. This design became the first ever microprocessor: the general-purpose 4-bit 4004, from Intel (short for Integrated Electronics). In April 1970, Wayne Pickette proposed to Intel that they use the computer-on-a-chip for the Busicom project. Then, in December, Gilbert Hyatt filed a patent application entitled 'Single Chip Integrated Circuit Computer Architecture', the first basic patent on the microprocessor.

The 4004 caused a revolution in the electronics industry, as previous electronic systems had a fixed functionality; with this processor, the functionality could be programmed in software. Amazingly, by today's standards, it could only handle four bits of data at a time (a nibble), contained around 2,300 transistors, had 46 instructions, and allowed 4KB of program code and 1KB of data. From this humble start, the PC has since evolved using Intel microprocessors. Intel had previously been an innovative company, having produced the first memory device (static RAM, which uses six transistors for each bit stored), the first DRAM (dynamic memory, which uses only one transistor for each bit stored) and the first EPROM (which allows data to be written to the device and then stored permanently).
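To see what "four bits at a time" means in practice, the following snippet (modern Python written for this article, not 4004 code) adds two 16-bit numbers the way a 4-bit processor must: one nibble at a time, carrying between the steps.

```python
# Illustrative only: adding two 16-bit numbers four bits (one nibble) at a time,
# carrying between nibbles, as a 4-bit processor such as the 4004 would have to.
def add_16bit_by_nibbles(a: int, b: int) -> int:
    result, carry = 0, 0
    for i in range(4):                      # a 16-bit value holds four nibbles
        na = (a >> (4 * i)) & 0xF           # extract nibble i of each operand
        nb = (b >> (4 * i)) & 0xF
        total = na + nb + carry
        result |= (total & 0xF) << (4 * i)  # keep the low 4 bits of the sum
        carry = total >> 4                  # propagate the carry to the next nibble
    return result & 0xFFFF

assert add_16bit_by_nibbles(0x1234, 0x0FFF) == 0x2233
```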

In the same year, Intel announced the 1KB RAM chip, which was a significant increase over previously produced memory chips. Around the same time, one of Intel's major partners, and, as history has shown, competitors, Advanced Micro Devices (AMD) Incorporated, was founded. It was started when Jerry Sanders and seven others left (yes, you've guessed it) Fairchild Semiconductor; this incubator of the electronics industry was producing many spin-off companies.

At the same time, the Xerox Corporation gathered a team at the Palo Alto Research Center (PARC) and gave them the objective of creating 'the architecture of information'. This would lead to many of the great developments of computing, including personal distributed computing, graphical user interfaces, the first commercial mouse, bit-mapped displays, Ethernet, client/server architecture, object-oriented programming, laser printing and many of the basic protocols of the Internet. Few research centers have ever been as creative and forward-thinking as PARC was over those years.

In 1971, Gary Boone, of Texas Instruments, filed a patent application relating to a single-chip computer, and the 4004 microprocessor was released in November. In the same year, Intel delivered the 4004 microprocessor to Busicom and then, in 1972, again showed its innovation by being the first to develop an 8-bit microprocessor, the 8008. Excited by the new 8-bit microprocessor, two kids from a private high school, Bill Gates and Paul Allen, rushed out to buy the new 8008 device. This, they believed, would be the beginning of the end for the large and expensive mainframes (such as the IBM range) and minicomputers (such as the DEC PDP range). They bought the processors for the high price of $360 (possibly a joke at the expense of the IBM System/360 mainframe), but even they could not make the 8008 support BASIC programming. Instead, they formed the Traf-O-Data company and used the 8008 to analyse tickertape read-outs of cars passing in a street. The company would close down in the following year (1973) after it had made $20,000, but from this enterprising start, one of the leading computer companies in the world would grow: Microsoft (although it would initially be called Micro-soft).

At the end of the 1970s, IBM's virtual monopoly on computer systems started to erode: from the high-powered end as DEC developed its range of minicomputers, and from the low-powered end by companies developing computers based around the newly available 8-bit microprocessors, such as the 6502 and the Z80. IBM's main contenders, other than DEC, were Apple and Commodore, who introduced a new type of computer: the personal computer (PC). The leading systems at the time were the Apple I and the Commodore PET. These captured the interest of the home user, and for the first time individuals had access to cheap computing power. These flagship computers spawned many others, such as the Sinclair ZX80/ZX81, the BBC microcomputer, the Sinclair Spectrum, the Commodore VIC-20 and the classic Apple II (all of which were based on the 6502 or Z80). Most of these computers were aimed at the lower end of the market and were mainly used for playing games rather than for business applications. IBM finally decided, on the advice of Bill Gates, to use the 8088 for its version of the PC, and not, as it had first thought, the 8080 device. Microsoft also persuaded IBM to introduce the IBM PC with a minimum of 64KB of RAM, instead of the 16KB that IBM had planned.

In 1973, the model for future computer systems emerged at Xerox's PARC, when the Alto workstation was demonstrated with a bit-mapped screen (showing the Cookie Monster from Sesame Street). The following year, at Xerox, Bob Metcalfe demonstrated the Ethernet networking technology, which was destined to become the standard local area networking technique. It was far from perfect, as computers contended with each other for access to the network, but it was cheap and simple, and it worked relatively well.
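The contention idea can be sketched as follows. This is a deliberately simplified illustration of carrier sensing with random back-off, written for this article (the function and station names are invented); it is not the original Ethernet implementation.

```python
# Simplified illustration of contention on a shared cable: sense the carrier,
# transmit if idle, otherwise back off for a random (roughly doubling) delay.
# Station and function names are invented for this example.
import random

def try_to_send(station, channel_busy, max_attempts=5):
    for attempt in range(max_attempts):
        if not channel_busy():                   # carrier sense: is the cable idle?
            print(f"{station}: transmitted on attempt {attempt + 1}")
            return True
        slots = random.randint(0, 2 ** attempt)  # random back-off, range grows each try
        print(f"{station}: cable busy, backing off for {slots} slot(s)")
    return False

# Example: the cable is busy on the first two checks, then free.
state = iter([True, True, False])
try_to_send("Alto-1", lambda: next(state))
```

This captures why the scheme was "far from perfect" (a busy cable simply costs a retry) and yet cheap and simple, since no central controller is needed.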

IBM was also innovating at the time, creating a cheap floppy disk drive. They also produced the IBM 3340 hard disk unit (a Winchester disk), whose recording head sat on a cushion of air, 18 millionths of an inch above the platter. The disk was made with four platters, each 8 inches in diameter, giving a total capacity of 70MB.

IBM's days of leading the field quickly became numbered when Compaq managed to reverse-engineer the software which allowed the operating system to talk to the hardware: the BIOS. Once this happened, IBM struggled to set standards in the industry, making several attempts to define new operating systems, such as OS/2, and new computer architectures, such as the MCA bus standard. The industry decided that common standards were more important than ones defined by a single company.