Rise of Hacktivism: Attacks Against Virtual Infrastructure Are Increasingly a Tool of Choice for Protesters

Introduction

In an era of always-on connectivity, protesters can make a strong statement against an organisation by bringing down its information infrastructure. It is something that can make front-page news, and it is the equivalent of protesting from afar, with very little chance of being traced.

So, as a protest against St. Louis County Police's involvement in the killing of unarmed teenager Michael Brown in Ferguson, Mo., a DDoS (Distributed Denial of Service) attack was launched on the police Web site, which brought the site down for several days. Overall it made a strong statement, and one which the authorities could do little about. Along with this, the group responsible, who declared links to Anonymous, outlined that they had hacked into the St. Louis County Police network and gained access to dispatch tapes related to the day of the shooting, which they then uploaded to YouTube.

Why is DDoS so successful?

This year (2014) has seen more DDoS attacks than ever before, with a doubling of high-end attacks over the year, and with over 100 attacks peaking at more than 100Gbps. The largest attack so far was against a Spanish site, where NTP (Network Time Protocol) was used to bombard the Web infrastructure. With this, the intruder makes requests from compromised hosts to an NTP server for the current time, but spoofs the victim's address as the return address for the request. Overall, the protocols used on the Internet were not designed with security in mind, so it is possible to use a source address that differs from the one that actually made the request, and the reply is then reflected onto the victim. This specific attack peaked at 154.69Gbps, which is more than enough to bring most networks down. The key aim is to exhaust networked resources, such as the interconnected devices, the bandwidth of the connections to the Internet, and the CPU of the servers.
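As a rough illustration of why reflection scales so well, here is a minimal Python sketch; the botnet size, uplink speed and amplification factor are purely illustrative assumptions, not figures from the attack described above:

# Illustrative sketch: estimate the traffic a victim receives in a
# reflection attack, given the attackers' upstream bandwidth and an
# assumed amplification factor for the abused protocol.

def reflected_traffic_gbps(attacker_upstream_mbps, amplification_factor):
    """Traffic arriving at the victim, in Gbps, if spoofed requests are
    sent at the attackers' full upstream rate."""
    return attacker_upstream_mbps * amplification_factor / 1000.0

# Assumed values: 500 compromised hosts, each with a 10 Mbps uplink,
# and an amplification factor of 50x for the reflected replies.
hosts, uplink_mbps, factor = 500, 10, 50
print(reflected_traffic_gbps(hosts * uplink_mbps, factor), "Gbps")  # 250.0 Gbps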

The reasons that DDoS is often successful are three-fold:

  • Difficult to differentiate between good and bad traffic. Overall the Internet has been created from some extremely simple protocols, which were not designed with security in mind, so it is extremely difficult to differentiate good traffic from bad traffic. Normally organisations throttle back when they are under attack, by not accepting new connections and waiting until existing connections have been closed (a small sketch of this throttling idea follows this list).
  • Tracks are obfuscated. With a reflection attack, the traffic arrives from intermediate devices, so it is difficult to trace the actual source of the attack. With networks such as Tor, the intruder can further hide their tracks.
  • Zombie nodes are used in the attack. There are many compromised hosts on the Internet, including those recruited into the Zeus botnet. Each of these can be controlled and used to attack the target.
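As a minimal sketch of the throttling idea mentioned in the first bullet (the threshold and connection identifiers are illustrative assumptions, not a production mitigation), new connections are refused once a limit is reached and only accepted again as existing ones close:

# Simplified model of "throttling back" under attack: refuse new
# connections above a threshold, and free slots as old ones close.

class ConnectionThrottle:
    def __init__(self, max_active=1000):
        self.max_active = max_active
        self.active = set()

    def try_accept(self, conn_id):
        """Accept a new connection only if we are under the limit."""
        if len(self.active) >= self.max_active:
            return False                    # under attack: refuse new connections
        self.active.add(conn_id)
        return True

    def close(self, conn_id):
        self.active.discard(conn_id)        # frees a slot for a new connection

throttle = ConnectionThrottle(max_active=2)
print(throttle.try_accept("a"), throttle.try_accept("b"), throttle.try_accept("c"))  # True True False
throttle.close("a")
print(throttle.try_accept("d"))             # True again once capacity is freed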

The Rise of Hacktivism

As we have seen in Russia's suspected cyber attacks on Web sites in Estonia, and in the Arab Spring uprisings, the Internet is playing an increasing part in conflicts around the World. As we move into an Information Age, the battlefield of the future is likely to be in cyberspace, and it will also be the place where nation states struggle to control news outlets.

A cause or a fight?

Organisations need to understand that there are new risks within the Information Age and that there are new ways to distribute messages, especially from those who are skilful enough to disrupt traditional forms of dissemination. Thus Hacktivism can become a threat to any nation state or organisation (Figure 1).

Figure 1: Security is not just technical, it is also Political, Economic, and Social

The important thing to note about Hacktivism is that the viewpoint on the Hacktivist will often reflect the political landscape of the time, and that viewpoint can change over time. While Adolf Hitler and Benito Mussolini are still rightly seen as agents of terror, Martin Luther King and Mahatma Gandhi are now seen as freedom fighters. Viewpoints often change, and for some the Hacktivist can take on the image of a freedom fighter.

Figure 2: Hacktivism

Big v Little

The Internet supports a voice for all, and there are many cases of organisations and nation states upsetting groups around the World, and of those groups successfully rebelling against them. In 2012, Tunisian Government web sites were attacked because of WikiLeaks censorship, and in 2011, the Sony PlayStation Network was hacked after Sony said they would name and shame the person responsible for jailbreaking their consoles (Figure 3). Just because you are small on the Internet doesn't mean you cannot have a massive impact. Sony ended up losing billions from their share price, and lost a great deal of customer confidence.

Figure 3: Hacktivism examples

HBGary Federal

The HBGary Federal example is one of the best in terms of showing why organisations need to understand their threat landscape. Aaron Barr, the CEO of HBGary Federal, announced that they would unmask some of the key people involved in Anonymous, and contacted a host of agencies, including the NSA and Interpol. Anonymous bounced a message back saying that he shouldn't do this, or they would go after them. As HBGary Federal were a leading security organisation, they thought they could cope with this and went ahead with their threat.

Anonymous then searched around the HBGary Federal CMS system, and found that a simple request of:

http://www.hbgaryfederal.com/pages.php?pageNav=2&page=27

gave them access to the complete database of usernames and hashed passwords for the site. As the passwords were not salted, it was an easy task to crack the hashes back to the original passwords. Their targets, though, were Aaron Barr and Ted Vera (COO), each of whom used weak passwords of six characters and two numbers, which are easily broken.
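To see why unsalted hashes fall so quickly, here is a small hypothetical example (the password, hash and word list are invented for illustration, and have nothing to do with the actual HBGary data): a precomputed table of hashes of likely passwords cracks the unsalted hash instantly, while a per-user salt makes the same table useless.

import hashlib, os

def md5_hex(text):
    return hashlib.md5(text.encode()).hexdigest()

# Hypothetical "leaked" unsalted hash of a weak password.
leaked_hash = md5_hex("summer99")

# Dictionary / rainbow-table style attack: precompute hashes of candidates.
candidates = ["password1", "letmein22", "summer99", "qwerty12"]
table = {md5_hex(p): p for p in candidates}
print("Cracked:", table.get(leaked_hash))              # -> summer99

# With a random per-user salt, the precomputed table no longer matches,
# even when two users share the same password.
salt = os.urandom(8).hex()
salted_hash = md5_hex(salt + "summer99")
print("Salted hash in table?", salted_hash in table)   # -> False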

Now that they had the login details, Anonymous moved onto other targets. Surely they wouldn't have used the same password for their other accounts? But when they tried, they gained access to a whole range of accounts using the same password (including Twitter and Gmail). This allowed Anonymous access to GBs of R&D information. Then they noticed that the System Administrator for the company's Gmail account was Aaron, and managed to gain access to the complete email system, which included the email system for the Dutch Police.

Figure 4: Access to email and a whole lot more.

Finally they went after the top security expert: Greg Hoglund, who owned HBGary. They sent him an email, from within the Gmail account of a system administrator, asking for confirmation of a key system password, and Greg replied with it. Anonymous then went on to compromise his accounts, which is a lesson for many organisations. While HBGary Federal has since been closed down, due to the adverse publicity around the hack, the partner company (HBGary) has gone from strength to strength, with Greg making visionary presentations on computer security around the World.

Figure 5: Greg's compromise.

Conclusions

A key factor in these types of attacks is that, when an organisation is not prepared, the complete infrastructure can fall like a house of cards. In Ferguson, the email system also went off-line for a while, and, to protect themselves from data leakage, they took down all personal information from their site.

The protection of IT infrastructures against DDoS is extremely difficult, and organisations need to understand how they will cope with these types of attacks. Along with this, many organisations are even more proactive, and actively listen to the "buzz" around hacking events on the Internet, in order to put mitigation methods in place. Often it's a matter of coping with the attack, and enabling new network routes and virtualised devices while it happens.

Overall it is a difficult debate, and one person's cause is another's fight, but the technological challenge remains. It is one of the most difficult faced by IT architectures, and is often costly to deal with.

Hacking Traffic Lights and the Internet of Things – We Should All Beware of Bad Security!

Introduction

As we move into an Information Age we become increasingly dependent on data for the control of our infrastructures, which leaves them open to attackers. Often critical infrastructure is obvious, such as the energy supplies for data centres, but it is often the least obvious parts that are the most open to attack. This could be the air conditioning system in a data centre, where a failure can cause the equipment to virtually melt (especially tape drives), or the control of traffic around a city. As we move towards using data to control and optimize our lives, we become more dependent on it. Normally in safety-critical systems there is a failsafe control mechanism: an out-of-band control system which makes sure that the system does not operate outside its safe working limits. In a process plant, this might be a vibration sensor on a pump which, if the pump is run too fast, will detect the problem so that the control system can place the overall system into a safe mode. For traffic lights there is normally a vision capture of the state of the lights, and this is fed back to a failsafe system that is able to detect when the lights are incorrect. If someone gets access to the failsafe system, they can thus overrule the safety mechanism and compromise the system. This article outlines a case where this occurred, and some of the lessons that can be learnt from it.

Traffic Light Hacking

So, to prove a point, security researchers, led by Alex Halderman at the University of Michigan, managed to use a laptop and an off-the-shelf radio transmitter to control traffic light signals (https://jhalderm.com/pub/papers/traffic-woot14.pdf). Overall they found many security vulnerabilities and managed to control over 100 traffic signals in Michigan using a single laptop. In order to be ethical in their approach, they gained full permission from the road agency, and made sure that there was no danger to drivers. Their sole motivation was to show that traffic control infrastructure could be easily taken over.

Overall they found a weak implementation of security, with the use of open and unencrypted radio signals, which allowed intruders to tap into the communications, and the use of factory-default usernames and passwords. Along with this, there was a debugging port which could be easily compromised.

In the US, the radio frequency used to control traffic lights is typically in the ISM band at 900 MHz or 5.8 GHz, which makes it fairly easy to get equipment to communicate with the radio system. The researchers used readily available wireless equipment and a single laptop to read the unencrypted data on the wireless network.

Figure 1 provides an overview of the control system, where the radio transmitter provides a live feed (and other sensed information) to the road agency. The induction loop is normally buried in each of the junctions and detects cars as they pass over it, while the camera is used to watch the traffic lights and feed the colours of the lights back to the controller. In this way there is a visual failsafe.

Overriding the failsafe

The MMU (Malfunction Management Unit) is the failsafe operator on the system and ensures that the lights are not put into an unsafe state (such as red and green at the same time); the lights are then adjusted using the information gained from the induction loops in the road (which sense cars as they pass over them). If control can be gained over the MMU, allowing access to the controller, the lights can be compromised to go into incorrect states, or to stay at steady red (and cause gridlock within a city). Within the MMU controller board, the researchers found that by connecting a jumper wire, the output from the controller was ignored, and the intersection was put into a known-safe state.
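The MMU's role can be pictured as a small conflict table. The toy sketch below (not the real controller logic; the phase names and rules are assumptions for illustration) rejects any requested state in which two conflicting approaches would both show green, and falls back to a known-safe all-red state:

# Toy model of an MMU-style conflict check: never allow two conflicting
# approaches to be green at once; otherwise fall back to all-red.

CONFLICTS = [("north_south", "east_west")]   # approaches that must never both be green
ALL_RED = {"north_south": "red", "east_west": "red"}

def validated_state(requested):
    for a, b in CONFLICTS:
        if requested.get(a) == "green" and requested.get(b) == "green":
            return dict(ALL_RED)             # failsafe: drop to the known-safe state
    return requested

print(validated_state({"north_south": "green", "east_west": "red"}))
print(validated_state({"north_south": "green", "east_west": "green"}))  # forced to all-red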

Figure 1: Overview of the traffic control system

Same old debug port

A typical security problem in many control systems is that there is often a debug port, which gives highly privileged access to the system. In this compromise, the researchers found that the control boxes ran VxWorks 5.5, which leaves a debug port open for testing. They then sniffed the packets between the controller and the MMU, and found that there was no authentication used, and that the messages were not encrypted, so they could be easily viewed and replayed. This allowed them to reverse engineer the messaging protocol for the lights. They then created a program to activate any of the buttons within the controller and display the results, and then even to access the controller remotely. In the end they managed to turn all the lights in the neighbourhood to red (or all green on a given route – in order to operate safely within the experiment).
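The missing ingredient here was any authentication on the controller messages. As an outline of what that could look like (the message format and key handling are illustrative assumptions, not the vendor's protocol), appending an HMAC computed with a shared key lets the receiver reject forged commands; a real deployment would also include a sequence number or timestamp inside the signed data to stop straightforward replays:

import hmac, hashlib

SHARED_KEY = b"example-shared-secret"     # assumption: provisioned out of band

def sign(command):
    tag = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    return command + b"|" + tag

def verify(message):
    command, _, tag = message.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, command, hashlib.sha256).hexdigest().encode()
    return command if hmac.compare_digest(tag, expected) else None

msg = sign(b"SET north_south GREEN")
print(verify(msg))                         # genuine command is accepted
print(verify(b"SET all RED|" + b"0" * 64)) # forged tag is rejected (None)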

DoS

Finally, they found that the units were susceptible to a denial-of-service (DoS) attack, where continual accesses with incorrect control signals over the network caused the malfunction management unit to put the lights into a failure state (all red). In this way the system failed to cope with excessive traffic, and all the units would end up failing with this type of probe.


Conclusions

This vulnerability showed all the standard signs of the poor integration of security, which is common in many systems where security is not thought of as a major concern. This is not a small-scale issue, as the researchers identified that this type of system is used in more than 60% of the traffic intersections in the US. If a malicious agent wanted to bring a city, or even a country, to its knees, they could just flip a switch … and there is no road transport system, which can then cause chaos to the rest of the infrastructure. We really need to rethink the way that systems are designed, and probe them for their vulnerabilities.

The researchers in this study already have other easy targets in their sights, such as tapping into the public messaging systems on freeways, and into the infrastructure created by the U.S. Department of Transportation (USDOT) for vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) systems, along with the new work related to the Connected Vehicle Safety Pilot program. As we move into a world of intercommunication between cars and the roadway, and between cars themselves, it is important that we understand whether there are security problems, as with the flick of a switch an attacker could cause mass chaos.

We really need to start training software developers and embedded systems designers to understand the vulnerabilities of their systems, and to realise that times have changed from simply testing whether something works or not!

Goodbye Big Blue and Hello To Big Data

Turning Full Circle (360 degrees)

With the sale of their low-end server business to Lenovo, IBM have completed their journey from a company which led the industry for 70-odd years, then produced the PC and became an also-ran. After a 35-year detour, they have now fully returned to their roots: producing high-performance computers; leading on computer software integration; and focusing on the client and their needs. The full circle is perhaps highlighted by one of the great computers of the past: the IBM System/360, which led the industry for decades. For IBM, the generic selling of computer boxes has never been seen as something that is likely to be a conduit for them to innovate and lead. For the creator of the ATM and the hard disk, IBM have thrown off the shackles of the last 40 years, and are back where they want to be.

The current trends within the computing industry are clear for all to see, as smaller hand-held units become the device of choice for consumers, and computing power is now bundled into clusters, where resources can be shared and provided on an on-demand basis. The desktop and the low-end server become just the end part of a large-scale infrastructure providing an elastic computing provision – which is basically a mainframe, one which has lots of sub-computers, and which can easily be scaled up and torn down. The world has changed since IBM created the PC, with its 4.7MHz clock and 640kB of memory. Now we have hand-held devices running with four processing cores, more than 1,000 times faster, and with more than 25,000 times the memory of the original PC. In fact, it is not the processing power and memory capacity of a single device that is the key thing these days; it is the ability to add it into a cluster that provides the most interest for many companies. A failure in any part of a computer used to cause many problems, but with a clustered infrastructure the failure is not noticed by the users, as the data and processing power is mirrored across the cluster.

So we have moved on from the days of single computer systems connecting to a network, and we now see many computers connected together to create a self-healing infrastructure, where it is fairly easy to add new cluster elements, and which is used to process and analyse large data sets. It is this high-end market which IBM see as the future, and many companies too see their ability to succeed in the market place based on their ability to use data analytics to drive forward.

IBM have generally developed a broad range of products which span everything from application software to computer hardware. In the 1960s and 1970s, large computer companies, such as DEC, defined the standards in the industry; the IBM PC changed this, and provided a platform of generalised hardware which companies could quickly copy, defining new industry standards. The definition of a layered approach to networking also allowed companies to specialise as horizontal integrators, and they were able to move faster and innovate more than the vertical integrators.

Turning a Large Ship Around

Within the computing industry, companies have to spot opportunities, and make sure they move their product provision to take advantage of market changes. It is an industry in which leading companies can go from boom to bust in a short time. There are thus many examples of companies which, at their peak, failed to spot changes in the market, including Compaq, Sun Microsystems and DEC, who became fixated on a certain product range, and failed to see the evolution of the market.

Even Apple struggled for a time in the 1990s to find markets for its hardware, and struggled to move the industry away from Microsoft Windows and the IBM PC towards its own operating system and computer hardware. They struggled against the impact of the IBM PC, and, almost as a last resort, adopted the same hardware that was used in the IBM PC (as their previous computers used the PowerPC microprocessor, which had a different way of running software from the Intel microprocessors used on the PC that IBM developed), and integrated a Unix-based operating system. Both of these changes considerably reduced their investment in the unseen parts of a PC, and focused their attention on the part the user was most interested in: the usability of the computer. In 2009, Apple completed their transformation with Mac OS X Snow Leopard, which only supported Intel-based architectures. Apart from the IBM transformation, it was probably one of the smartest moves ever seen in the computing industry. For a company such as IBM, who have based their product range on technical innovations, the route taken by Apple was not really one that IBM would ever have felt comfortable with.

Cloud, Cloud and More Cloud

While many companies in the computing industry, especially ones focused on desktop systems, such as Dell, are trying to understand what their existing product range should be and where they need to develop, IBM have provided one of the best examples of how a large corporation can lead within an industry sector, detect where their impact is failing, and go forward and transform themselves with a renewed focus.

For such a large organisation, IBM have managed to do this seamlessly, and they have come out as one of the leaders of the pack in Cloud Computing and Big Data. IBM made large-scale computers, took a detour for 40-odd years, and have now gone back to their roots. As one of the first companies to create mainframe computers, and the creator of one of the first programming languages – FORTRAN (created in 1957, and still used in the industry) – they are now back in the place they find most comfortable, supporting business sectors rather than computing needs.

Microsoft, Intel and Apple have successfully managed to plot paths through rapid changes in the computing industry, and have kept themselves in business for over 40 years, still innovating and leading the market in certain areas. While Apple and Intel have continued to invest in hardware development, IBM spotted a while back that low-end computer hardware, especially for desktops, would offer very little in the way of long-term profitability. So, in 2005, they signalled the first move out of low-level hardware by selling off their PC business, and this is now complete with the sale of the low-level server business, both to Lenovo.

The computing market, which was based around desktop computers from the 1970s until recently, is now focusing on mobile devices, which do not use the architecture developed initially by IBM, and on high-end servers which run Cloud Computing infrastructures. The need for "bare-metal" servers, where one operating system runs on one machine, is reducing fast, as high-end servers are now capable of running many servers and hosts at the same time. IBM have thus identified that it is the high-end market which will provide the future, especially in applying Big Data analysis to their range of services – and in becoming more service-oriented and developing in more profitable areas. These signs can also be seen within the IT security industry, where the need for security products, such as firewalls, stays fairly static, but the demand for security consultancy services and support rapidly increases.

At one time, one operating system ran on one computer, as the hardware could only cope with this. Once the computing power existed within a single machine to run more than one operating system at a time, and still give acceptable performance, it was the beginning of the end for the low-level server market.

Big Data

The requirement and market for hardware remains fairly static, but Cloud Computing and Big Data processing continue to expand fast, and it is a market which highlights the increasing dependence that many sectors have on the provision of Web services.

The amazing thing for IBM is that they have moved from a company which was built on defining hardware standards and controlling the industry, to one that is built on software and high-performance systems, and one that embraces open standards (especially for open source software). They have thus transformed themselves from a hardware company into a software one, and lead the world in that space. IBM is still seen as one of the most innovative companies in the world (with five Nobel Prizes and numerous awards for scientific impact, including inventing the ATM, magnetic stripe cards, relational databases, floppy disks and hard disks), and one with a strong brand image.

Their renewed focus goes back to their roots of the 1950s, with their lead within mainframe computers, and it is now built around their advanced computing infrastructure. In the 1990s, IBM showcased the increasing power of computers with the defeat of Garry Kasparov by the IBM Deep Blue computer. While the real mastery was just the sheer power of searching through millions of possible moves and finding the best one, they continued their focus on defeating humans in the areas where humans triumphed … understanding the English language. With this, IBM Watson managed to beat human opponents at Jeopardy!, and then managed to have a higher success rate in lung cancer diagnosis than leading cancer specialists. For the cancer diagnosis, Watson was sent back to medical school, and learnt how to spot lung cancer signs by analysing a whole range of unstructured data and using natural language processing.

Conclusions

IBM's renewed focus on moving their business was highlighted recently when they laid out their visions of the future, and for the first time none of these focused on hardware-based systems – the focus was on Cloud Computing and Big Data. These changes in the market have also been spotted by many companies making large-scale investments in scaling business applications towards a Cloud infrastructure.

Companies in the past have depended on their computer infrastructure, but increasingly it is their cloud and data infrastructure which is becoming their most important asset. The need for computing power increases by the day, and it is the ability to bring computers together into a general resource which becomes the most important element, where memory, processing power and disk storage can be seen as a single resource pool; at one time everything was built around distributing the computing power, which wasted a great deal of the resource. So, where are we now? We're building the largest computer ever created – the Cloud. IBM have shown they have the vision to move towards this, and to lead within Big Data and the new architectures of the future – in the same way that the Intel architecture built the computer industry – and it is one that could bring great benefits to every citizen, especially in evolving areas such as health care and education.

The History of IBM

One of the first occurrences of computer technology was in the USA in the 1880s. It was due to the American Constitution demanding that a census be undertaken every 10 years. As the population of the USA increased, it took an increasing amount of time to produce the statistics, and by the 1880s it looked likely that the 1880 census would not be complete until 1890. To overcome this, Herman Hollerith (who worked for the Government) devised a machine which accepted punch cards with information on them. These cards allowed a current to pass when a hole was present. Hollerith's electromechanical machine was extremely successful and was used in the 1890 and 1900 Censuses. He even founded the company that would later become International Business Machines (IBM): CTR (Computing Tabulating Recording). Unfortunately, Hollerith's business fell into financial difficulties and was saved by a young salesman at CTR, named Tom Watson, who recognized the potential of selling punch card-based calculating machines to American business. Watson eventually took over the company and, in the 1920s, renamed it International Business Machines Corporation (IBM). After this, electromechanical machines were speeded up and improved. Electromechanical computers would soon lead to electronic computers, using valves.

After the creation of ENIAC, progress was fast in the computer industry: by 1948, small electronic computers were being produced in quantity; within five years, 2,000 were in use; by 1961 it was 10,000; and by 1970, 100,000. IBM, at the time, had a considerable share of the computer market, so much so that a complaint was filed against them alleging monopolistic practices in its computer business, in violation of the Sherman Act. By January 1954, the US District Court made a final judgment on the complaint against IBM, and a 'consent decree' was signed by IBM, which placed limitations on how IBM conducted business with respect to 'electronic data processing machines'.

In 1954, the IBM 650 was built; it was considered the workhorse of the industry at the time (it sold about 1,000 machines, and used valves). In November 1956, IBM showed how innovative they were by developing the first hard disk, the RAMAC 305. It was towering by today's standards, with 50 two-foot diameter platters, giving a total capacity of 5MB. Around the same time, the Massachusetts Institute of Technology produced the first transistorised computer: the TX-O (Transistorized Experimental computer). Seeing the potential of the transistor, IBM quickly switched from valves to transistors and, in 1959, they produced the first commercial transistorised computer. This was the IBM 7090/7094 series, and it dominated the computer market for years.

In 1960, in New York, IBM went on to develop the first automatic mass-production facility for transistors. In 1963, the Digital Equipment Company (DEC) sold their first minicomputer, to Atomic Energy of Canada. DEC would become the main competitor to IBM, but eventually fail as they dismissed the growth in the personal computer market.

The second generation of computers started in 1961 when the great innovator, Fairchild Semiconductor, released the first commercial integrated circuit. In the next two years, significant advances were made in the interfaces to computer systems. The first was by Teletype, who produced the Model 33 keyboard and punched-tape terminal. It was a classic design and was used on many of the available systems. The other advance was by Douglas Engelbart, who received a patent for the mouse pointing device for computers. The production of transistors increased, and each year brought a significant decrease in their size.

The third generation of computers started in 1965 with the use of integrated circuits rather than discrete transistors. IBM again was innovative and created the System/360 mainframe. In the course of history, it was a true classic computer. Then, in 1970, IBM introduced the System/370, which included semiconductor memories. All of these computers were very expensive (approximately $1,000,000), and were the great computing workhorses of the time. Unfortunately, they were extremely expensive to purchase and maintain. Most companies had to lease their computer systems, as they could not afford to purchase them. As IBM happily clung to their mainframe market, several new companies were working away to erode their share. DEC would be the first, with their minicomputer, but it would be the PC companies of the future who would finally overtake them. The beginning of their loss of market share can be traced to the development of the microprocessor, and to one company: Intel. In 1967, though, IBM again showed their leadership in the computer industry by developing the first floppy disk. The growing electronics industry started to entice new companies to specialize in key areas, such as International Research, who applied for a patent for a method of constructing double-sided magnetic tape utilizing a Mumetal foil interlayer.

The beginning of the slide for IBM occurred in 1968, when Robert Noyce and Gordon Moore left Fairchild Semiconductor and met up with Andy Grove to found Intel Corporation. To raise the required finance they went to a venture capitalist named Arthur Rock. He quickly found the required start-up finance, as Robert Noyce was well known for being the person who first put more than one transistor on a piece of silicon. At the same time, IBM scientist John Cocke and others completed a prototype scientific computer called the ACS, which used some RISC (Reduced Instruction Set Computer) concepts. Unfortunately, the project was cancelled because it was not compatible with IBM's System/360 computers.

In 1969, Hewlett-Packard branched into the world of digital electronics with the world's first desktop scientific calculator: the HP 9100A. At the time, the electronics industry was producing cheap pocket calculators, which led to the development of affordable computers when the Japanese company Busicom commissioned Intel to produce a set of between eight and 12 ICs for a calculator. Instead of designing a complete set of ICs, Ted Hoff, at Intel, designed an integrated circuit chip that could receive instructions and perform simple functions on data, and which could be programmed to perform different tasks. This design became the first ever microprocessor, and Intel (short for Integrated Electronics) soon produced a general-purpose 4-bit microprocessor, named the 4004. In April 1970, Wayne Pickette proposed to Intel that they use the computer-on-a-chip for the Busicom project. Then, in December, Gilbert Hyatt filed a patent application entitled 'Single Chip Integrated Circuit Computer Architecture', the first basic patent on the microprocessor.

The 4004 caused a revolution in the electronics industry, as previous electronic systems had a fixed functionality; with this processor, the functionality could be programmed by software. Amazingly, by today's standards, it could only handle four bits of data at a time (a nibble), contained 2,000 transistors, had 46 instructions and allowed 4KB of program code and 1KB of data. From this humble start, the PC has since evolved using Intel microprocessors. Intel had previously been an innovative company, and had produced the first memory device (static RAM, which uses six transistors for each bit stored in memory), the first DRAM (dynamic memory, which uses only one transistor for each bit stored in memory) and the first EPROM (which allows data to be downloaded to a device, where it is then permanently stored).

In the same year, Intel announced the 1KB RAM chip, which was a significant increase over previously produced memory chips. Around the same time, one of Intel's major partners, and also, as history has shown, competitors, Advanced Micro Devices (AMD) Incorporated, was founded. It was started when Jerry Sanders and seven others left – yes, you've guessed it – Fairchild Semiconductor. The incubator of the electronics industry was producing many spin-off companies.

At the same time, the Xerox Corporation gathered a team at the Palo Alto Research Center (PARC) and gave them the objective of creating ‘the architecture of information.’ It would lead to many of the great developments of computing, including personal distributed computing, graphical user interfaces, the first commercial mouse, bit-mapped displays, Ethernet, client/server architecture, object-oriented programming, laser printing and many of the basic protocols of the Internet. Few research centers have ever been as creative, and forward thinking as PARC was over those years.

In 1971, Gary Boone, of Texas Instruments, filed a patent application relating to a single-chip computer, and the microprocessor was released in November. Also in the same year, Intel delivered the 4004 microprocessor to Busicom, and then in 1972 Intel showed how innovative they were by being the first to develop an 8-bit microprocessor. Excited by the new 8-bit microprocessors, two kids from a private high school, Bill Gates and Paul Allen, rushed out to buy the new 8008 device. This, they believed, would be the beginning of the end of the large, and expensive, mainframes (such as the IBM range) and minicomputers (such as the DEC PDP range). They bought the processor for the high price of $360 (possibly a joke at the expense of the IBM System/360 mainframe), but even they could not make it support BASIC programming. Instead, they formed the Traf-O-Data company and used the 8008 to analyse tickertape read-outs of cars passing in a street. The company would close down in the following year (1973) after it had made $20,000, but from this enterprising start one of the leading computer companies in the world would grow: Microsoft (although it would initially be called Micro-soft).

At the end of the 1970s, IBM's virtual monopoly on computer systems started to erode: from the high-powered end as DEC developed their range of minicomputers, and from the low-powered end by companies developing computers based around the newly available 8-bit microprocessors, such as the 6502 and the Z80. IBM's main contenders, other than DEC, were Apple and Commodore, who introduced a new type of computer – the personal computer (PC). The leading systems, at the time, were the Apple I and the Commodore PET. These captured the interest of the home user and, for the first time, individuals had access to cheap computing power. These flagship computers spawned many others, such as the Sinclair ZX80/ZX81, the BBC microcomputer, the Sinclair Spectrum, the Commodore Vic-20 and the classic Apple II (all of which were based on the 6502 or Z80). Most of these computers were aimed at the lower end of the market and were mainly used for playing games and not for business applications. IBM finally decided, with the advice of Bill Gates, to use the 8088 for its version of the PC, and not, as they had first thought, the 8080 device. Microsoft also persuaded IBM to introduce the IBM PC with a minimum of 64KB RAM, instead of the 16KB that IBM had planned.

In 1973, the model for future computer systems appeared at Xerox's PARC, when the Alto workstation was demonstrated with a bit-mapped screen (showing the Cookie Monster, from Sesame Street). The following year, at Xerox, Bob Metcalfe demonstrated the Ethernet networking technology, which was destined to become the standard local area networking technique. It was far from perfect, as computers contended with each other for access to the network, but it was cheap and simple, and it worked relatively well.

IBM was also innovating at the time, creating a cheap floppy disk drive. They also produced the IBM 3340 hard disk unit (a Winchester disk), which had a recording head that sat on a cushion of air, 18 millionths of an inch above the platter. The disk was made with four platters, each 8 inches in diameter, giving a total capacity of 70MB.

The days of IBM leading the field very quickly became numbered as Compaq managed to reverse engineer the software which allowed the operating system to talk to the hardware – the BIOS. Once they did this, IBM struggled to set standards in the industry, with several attempts to define new operating systems, such as OS/2, and new computer architectures, with the MCA bus standard. The industry decided that common standards were more important than ones defined by a single company.

Big Data is Your Data – Your Helper/Spy in the Cloud

Introduction

Over the past few weeks, I've observed that whenever I search for things on the Internet, there's a whole lot of advertisements which focus on the thing I've searched for, across a range of sites. This seems quite worrying, as your profile and search information is normally constrained within a certain site, with which you develop a trust relationship. So forget the NSA and the threats around national intelligence agencies spying on your network traces; the real focus for snooping on our lives comes from those trying to understand how we live, who have created a whole business model around mining data about the user. It is companies such as LinkedIn, Amazon, Facebook and Google that try to analyse the things we like, and what we buy, and it's the payback that we must make in order to get their "free" services. "Online Behavioural Advertising", such as AdChoices, is the most extreme version of using your profile and Web searches to push content to you on the Web sites that we connect to. The quote from the advertising material says:

Better ads and offers. With interest-based advertising, you get ads that are more interesting, 
relevant, and useful to you. Those relevant ads improve the online experience and help users 
find the things that interest them more easily.

But there’s a whole debate in here about the privacy of our profiles, and our ability to hide our tracks when we want. The statement is a dream for marketing departments, but is a Big Brother scenario for others. If you were asked whether you wanted this targeting, then there’s no argument, if not, there’s a problem here.

If you want to find out who is watching you and customising adverts for you, go to: http://www.aboutads.info/choices/. A quick check shows that there are 88 companies aiming their targeting at me. The privacy policy hints at the ability to gather a whole range of information on the user:

- Personal information you knowingly choose to disclose to us such as your name, mailing 
address, and email address. You may provide this information when you make requests for 
information or assistance.
- Non-personal information including but not limited to browser type, IP address, operating 
system, the date and time of a visit, the pages visited on this Site, the time spent viewing 
the Site, and return visits to the Site.
- We may also collect aggregated information as you and others browse our Site.

The marketing person’s dream

Overall, companies are getting better at understanding how we behave and at making their targeting more focused, as advertisers want to target their products exactly at the customers who want them. Never before has this opportunity existed, where key demographics can be targeted so specifically. In the past, an advertiser selling chocolate might define that they want to target a female audience, aged 18-35, with families, so they would feed their products through TV programmes and print media that had a high percentage of that demographic. Unfortunately, much of the time, they would miss their demographic, as much of the audience fell outside that range, and even for those within it, there was no guarantee that they actually liked chocolate, or considered it something they would purchase. And so Web companies have tried a few times to really understand us, and target us.

When our society started to exchange goods for money, it became obvious that those who made products could go and sell them, but the problem came when they wanted to sell in other regions: they would require people to do this for them, as it would take them away from what they did best, actually making the product. So our commerce infrastructure provides us with sellers who will take products and sell them to the end customer. The seller then has the opportunity to promote certain goods for a favourable commission, and both the provider and the seller benefit. Often, too, a product can be promoted by a third party, a marketeer, whose function is to lead the customer to the seller. Again a commission is paid, based on the evidence that they found the customer, every time the customer is led to the seller. Obviously, the marketeer could just promote the seller, and lots of customers could come to the shop and not actually purchase anything, so there might be a "finder's" fee for the number of customers that they bring to a store, but a higher fee would be paid as a "finder and sale" fee, where a customer actually goes ahead and purchases something.

On the Internet, this has become a major industry. In the past we have seen major advertising campaigns on TV and in print media, where we can often spot the intention of companies to target us, get us into stores, and make us purchase. On the Internet, the targeting is razor sharp, and the link between the product and the customer is now serviced by a whole range of stakeholders, including transaction verifiers, brand monitors, Web-traffic analytics, affiliate platforms and campaign verification (Figure 1).


Figure 1: Affiliate marketing

Pay per click or per purchase?

For users, the mining of data is generally fine; Facebook, and many other Internet-focused companies, especially Google and Amazon, extensively mine our data and try to make sense of it. In this symbiotic relationship, the Internet companies give us something that we want, such as free email, or the opportunity to distribute our messages. What was strange about the recent Facebook experiment is that the Facebook users were being treated in the same way as rats in a laboratory, and had no idea that they were involved in the experiment. On the other hand, it is not that much different from the way that affiliate networks have been created, which analyse the user, try to push content from an affiliate of the network, and then monitor the response from the user (Figure 1).

We increasingly see advertising in our Web page accesses, where the user is matched to their profile through a cookie, and where digital marketing agencies and affiliate marketing companies try to match an advertisement to that profile. They then monitor the success of the advertising using analytics such as the following (a small worked example follows the list):

  • Dwell time. This type of metric is used to find out how engaged the user has been before clicking on the content.
  • Click-through. This records the click-through rate on content. An affiliate publisher will often be paid for click-throughs on advertising material. This can lead to click-through scams, where users are paid to click on advertising content on a page.
  • Purchases. This records the complete process of clicking through and the user actually purchasing something. This is the best level of success, and can lead to higher levels of income, in some cases a share of the purchase price. Again this type of metric can lead to fraud, where a fraudster uses stolen credit card details to purchase a high-priced item through a fraudulent Web site, and uses this to gain commission from the on-line purchase (which is only traced as fraudulent at a later date).
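As a small worked example of these metrics (all of the numbers are made up), the sketch below computes the click-through and conversion rates, and the commission due under a pay-per-click model versus a revenue-share model:

# Made-up figures for one advert on one publisher site.
impressions, clicks, purchases = 50_000, 600, 18
revenue_per_purchase = 40.00                 # assumed average order value

ctr = clicks / impressions                   # click-through rate
conversion = purchases / clicks              # clicks that led to a purchase

pay_per_click = clicks * 0.05                             # e.g. 5p per click
revenue_share = purchases * revenue_per_purchase * 0.10   # e.g. 10% of each sale

print(f"CTR: {ctr:.2%}, conversion: {conversion:.2%}")        # 1.20%, 3.00%
print(f"Pay-per-click commission: {pay_per_click:.2f}")       # 30.00
print(f"Revenue-share commission: {revenue_share:.2f}")       # 72.00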

The targeting has been a slow process of evolution. At first, advertisements were placed within Web pages in a largely untargeted way, which is similar to deciding where you would want to advertise in the print media, and then placing your advertisement there. Normally the marketing team would have a strategy, define which Web sites best matched their demographic and how likely visitors would be to purchase from that site, and place advertisements on those Web sites. We can see in Figure 2 that the advertisements often integrate seamlessly into Web sites, and can fit themselves into whatever area is defined for them. In this way a Web designer can leave a space on the page, knowing that it will be filled in later. The advertisers normally even allow some customisation to make sure the advert fits in with the general layout of the page.


Figure 2: Integrated advertising

This type of marketing is fairly untargeted, and the user accepts it, as they are not being tracked in any way for the advertisement; it is similar to the way that a newspaper carries advertisements which the reader does not need to read.

100% Target

The other approach, which doesn't scare the user into thinking they are being tracked, is to place paid links (per click or per purchase) within the search results (Figure 3). Many users have had their Web search page redirected, such as with Conduit and with a whole range of redirectors embedded into freeware software. It is thus an excellent opportunity to advertise products without the user knowing. Google have, at least, marked the advertising links. Unfortunately for Google (and the marketing companies), users tend not to click on the paid links. So another method had to be found, and that choice is AdChoices.


Figure 3: Promoted links

We have all seen the benefit from Amazon, where we are recommended products that we have previously purchased, or have been looking at. It basically allows Amazon to provide a better service, and often jogs our memory. This, though, is done with the user's consent, as they log into the site, and the site has gained the trust to track the user and build up a profile of them. What scares users is when they purchase something from one site, and find out that they are being targeted on another site with something that they bought. This is cross-pollination of user profiling. Obviously this can happen where one company sells on the profile data of their users, which is then mined for their interests, but there is now a much more targeted operation going on – AdSense – and it is one which cross-pollinates.

So have you noticed that you have looked for a new hard disk on one site, and then, a few days later, you see an advertisement on your news site for a disk that exactly fits your interests? This is AdChoices working in the background, analysing your profile and feeding targeted adverts to you, as they know that users often surf for ideas and don't purchase straight away, so the adverts become jog points for your memory.
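A toy model of this cross-pollination (purely illustrative, and not how Google's systems are actually built) shows the key idea: because the same third-party cookie identifier is reported from every site that embeds the ad network, an interest recorded on one site can drive the advert shown on a completely unrelated one:

from collections import defaultdict

profiles = defaultdict(list)     # cookie_id -> list of observed interests

def ad_network_sees(cookie_id, site, page_topic):
    profiles[cookie_id].append(page_topic)
    # Serve an advert based on a previously recorded interest, not the
    # current page's topic - which is why the hard-disk advert follows you.
    interest = profiles[cookie_id][0]
    return f"advert for '{interest}' shown on {site}"

print(ad_network_sees("cookie-123", "hardware-store.example", "hard disk"))
print(ad_network_sees("cookie-123", "news-site.example", "horoscopes"))
# Both lines advertise 'hard disk', even on the news site.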

Figure 4 shows an example. Over the past few days I've had problems with Office 365 on my Mac desktop, and I've searched around the Web looking for fixes. Today, when I load a page from an online newspaper, it integrates an Office 365 link. In fact, wherever I go, it seems to think that Office 365 is the most important thing on my mind. Prior to this I had been considering purchasing a Microsoft Surface Pro 3, and often the adverts pushed at me were focused on this product. Unfortunately I got sick of seeing these adverts, especially as the ones pushed to me had an animation which highlighted the product range, and which looked like some of the nasty adverts of the past (such as "You are the Millionth User of this Site"), and I went and bought an Android device instead.


Figure 4: AdChoices

If you look back at Figure 2 (for the Celtic FC site), you'll find a network monitoring advertisement. This appears because I've been doing some research on visualising network traffic and log files, so Google has pushed a network monitoring package, as it's assumed that I'm in the market for some network monitoring tools. Having spoken to many other people, they too are observing that searching to buy beds on-line will cause a whole lot of adverts for beds on sites that have nothing to do with this. Thus Web sites are becoming places that can push products at you which they don't even have on show, and do not sell.

So it looks like the Internet has the perfect tool for the marketeer, as only Google can really know what we are doing. It's a powerful system, and they guard against destroying user trust with a matching policy based on:

  • The types of websites you visit and the mobile apps you have on your device.
  • The DoubleClick cookie on your browser and the settings in your Ads Settings
  • The websites and apps you’ve visited that belong to businesses advertising with Google
  • Your previous interactions with Google’s ads or advertising services
  • Your Google profile, including YouTube and Google+ activity

This basically says that whenever you use Google, such as for searching, watching online content or doing some social media activity with them, they are watching you and trying to understand your likes. They then use a cookie to track you on their affiliate network, and do the magic of matching in the background. Notice too that mobile apps are used for gathering information, as these devices contain a whole range of information that defines our behaviour, including how often we search the Web through the day, and how this changes.

A key statement is "Your previous interactions with Google's ads or advertising services", which basically says that they are monitoring our clicks and follow-throughs on adverts. So even if the advert is there and you have an interest in the product, if you are not clicking on it, it's a waste of space. You therefore need to watch which adverts you click on, as it puts a big tick in the box of your likes if you do. With Facebook you see the mining up front, and then get the opportunity to say that the advertising material is not quite what you want, but the future is towards automated machine algorithms and less user choice. The Web you'll get will be the one that your Web company has planned for you, and the concept of jumping from one site to the next is going, as affiliates are creating networks where they can cross-pollinate your profile.

So it’s a very strange matching service that is going on, and it is one example of how companies aim to gather you tracks over the Internet, and understand how you live and what you like. In that way they can target just you. It’s a very fine line that these companies walk, as some companies who integrate into your browser as accused of spying on us, so Google better watch and tread carefully. For Google they have guarded against losing user trust with:

  • Not linking your name or personally identifiable information to your DoubleClick cookie without your consent.
  • Not associating your DoubleClick cookie with sensitive topics like race, religion, sexual orientation or health without your consent.

So they don't give away your name, just the fact that you are a target customer, and they don't give away sensitive information about you – but obviously "sensitive" information could relate to your shopping habits too.

So, finally, let's do some searching for "beds" to purchase, and then access our horoscope. The result is shown in Figure 5, which seems to throw back your browsing history for all to see. So you can see that, on the Internet, horoscopes can now even predict what you are likely to buy next. We really would require tunnel vision not to see the products being recommended alongside this horoscope.


Figure 5: Your horoscope is personalised with a special interest in your preference for buying beds

Conclusions

Like it or not, you are interesting to a whole range of companies, and there has never been such an opportunity for them to learn everything about us … when we get up in the morning, what we have for breakfast, how we get to work, what type of software we use, what brands we like, and so on. In the past, companies used survey data to understand their audience, so that they could target certain advertising channels. Now Google knows whether you're a sensitive soul – watching the Great British Bake Off on YouTube – or like driving fast – with Top Gear repeats. So we are all being observed, and there's a whole lot happening about you in the background, and it's all patched together with a little cookie that is dropped on your machine. The EU tried to do something about this matching and the consent around it, but it didn't stick, as it was often a waste of time for users to understand how the cookie was being used. So marketeers love those cookies … and not the chocolate ones!

The Internet is almost predicting our needs before we have even thought about them. A search that I did for an Audi A4 warning light resulted in adverts for new cars; I assume the car companies perhaps hope that we might be thinking of getting rid of our car at the first sign of a problem, or that they can plant the message in our heads that it's time for a new car. Who knows, but one thing that is sure is that Big Data gives companies the opportunity to get a 100% hit rate (once they can properly understand our searches). So watch out when you go searching for pile ointment, as you may get a whole lot of adverts that you might not like in your online life.

There are questions that you need to ask yourself about the information that the Internet is gathering on you, and it isn't just IP addresses; it is now trying to understand you, and how you live. Did you know, for example, that your Android phone actually tracks your location as you move and stores it in the Cloud, and that the same happens with your iPhone? Google and Apple thus know exactly where you work, how you get to work, and when you go for lunch … and so on.

If the user knows about the targeting, and agrees with it, then this is fine. Unfortunately, few people seem to know this behavioural analysis is going on. Along with this, the user must be given the chance to opt out. Unfortunately, the current system is flawed, as it is not possible to opt out of all the advertising networks.


Spooks, Spies and Hogwarts … and the Dark Web

Introduction

We were so happy to receive an acknowledgement that our MSc programme was to be certified by GCHQ, which validated many of the developments we've implemented (virtualised infrastructures, state-of-the-art modules, on-line lectures, and so on) and the standards that we've set.

We did, though, have to smile (and grit our teeth at times) when the headlines said things like: "A Masters in James Bond?", "GCHQ names the Hogwarts for Hackers", and "MSc … degree courses in spying". For us, nothing could be further from what is implied by those headlines; we are training professionals to work at the highest levels of professionalism, who will secure systems and investigate intrusions into them. Their roles will be to protect citizens and companies against things like Denial of Service (DoS), Intellectual Property (IP) theft, and fraud. Whilst any technology or method can have a flipside, most of our graduates will go into jobs which protect systems and build the new architectures of the future. Our graduates will often be the creators, the defenders and the protectors.


For us, we've tried not to trivialize computer security by talking about hackers, and often use more professional words like intruders, as "hacker" often implies guilt, and gives a certain perception of maliciousness, before any intent has been proven. A simple ping of one computer from another can be used to find out whether the computer is on-line or not, while, on the other hand, it might be perceived as a probe of the computer for malicious reasons.
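To show just how neutral such a probe can be, here is a minimal sketch in Python (purely illustrative; the host name is an assumption) which simply asks whether a machine replies to a single ping – exactly the kind of action that could be read as harmless housekeeping or as hostile reconnaissance, depending on intent.

import subprocess

def is_online(host):
    """Return True if the host replies to a single ping (Unix-style flags)."""
    result = subprocess.run(
        ["ping", "-c", "1", host],      # on Windows this would be ["ping", "-n", "1", host]
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # The host name is purely illustrative.
    print(is_online("example.com"))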

For us, we are technical specialists, and can recommend the course of action after intrusions into systems; in criminal acts, it is often up to others to decide on the actual guilt of someone, and our role is to report in a fair and honest way. This involves carefully articulating key technical terms, as the general public often struggles to get past fairly simple concepts, such as an IP address, and has very little chance of ever understanding complex cryptography methods and all the security instruments that are integrated into systems.

The Dark Web

So when we read of the Dark Web, it is portrayed as an Internet that doesn't really exist, where users pass each other messages and store files in a place that no-one can get to. It's a scary place full of criminals and people who are out to do bad things. Basically, the Dark Web is a network infrastructure which secures the communication between one computer and another, and then uses host computers to create the channel. In itself this is not a bad thing, as the internet was created so that anyone can either create their own network – an internet which uses private addresses – or connect to the Internet, where network packets are routed to globally defined network addresses. There is nothing to say that users cannot secure their own communications and pick their own routes through the network. Basically the flaw is that the protocols used on the Internet are flawed in themselves, as they are often clear text, where anyone with sniffing software, and access to the data packets as they flow through the network, can see their contents. So all the old protocols such as TELNET (remote access), HTTP (Web), FTP (file transfer), and SMTP (email) are being moved towards their secure versions: SSH, HTTPS, FTPS and SMTPS, each of which is more secure, and how they should have been defined in the first place. The key objective in the creation of the Internet was just to get computers connected, and security was rarely talked about.
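To make the cleartext point concrete, here is a minimal sketch in Python (the host name is illustrative, not any particular site mentioned above) which sends the same request once in the clear and once wrapped in TLS; a sniffer on the path sees everything in the first case and only ciphertext in the second.

import socket
import ssl

HOST = "example.com"                     # illustrative host name
REQUEST = ("GET / HTTP/1.1\r\n"
           "Host: " + HOST + "\r\n"
           "Connection: close\r\n\r\n").encode()

# Cleartext HTTP: the request and response cross the network unencrypted,
# so anyone sniffing packets along the path can read them.
with socket.create_connection((HOST, 80)) as plain:
    plain.sendall(REQUEST)
    print("HTTP  :", plain.recv(64))

# HTTPS: the same bytes are wrapped in TLS, so a sniffer sees only ciphertext.
context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as raw:
    with context.wrap_socket(raw, server_hostname=HOST) as tls:
        tls.sendall(REQUEST)
        print("HTTPS :", tls.recv(64))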

RFCs

This lack of concern about security in the initial creation of the Internet is highlighted by the RFC (Request For Comments) documents, which were the way that standards such as HTTP and email became accepted quickly: organisations such as DARPA posted their thoughts for a standard, received comments back, and revised them. Industry could then go ahead and implement them, without the massive overhead of taking them to international standards agencies like the ISO (International Organization for Standardization) or the IEEE. With these agencies, a standard would take years to develop, and often involved tinkering from countries trying to protect their industries, which often stifled innovation. Some classics exist which have provided the core of the Internet, including RFC 791, which defines the format of IP packets (IPv4), and RFC 793, which defines TCP (Transmission Control Protocol); together they define the foundation of virtually all of the traffic that exists on the Internet. Many protocols, although now limited, became de-facto standards and have moved on little since, including HTTP 1.1, whose lineage goes back to RFC 1945 (HTTP/1.0). The lack of thought given to security is highlighted by the fact that it took until RFC 1508 before the word "Security" was included in a title (September 1993), more than 12 years after the IP packet definition (September 1981).

The right to remain anonymous

As we move into an Information Age, there is a continual battle on the Internet between those who would like to track user activities and those who believe in anonymity. The recent Right to be Forgotten debate has shown that very little can be hidden on the Internet, and deleting these traces can be difficult. With the right to be anonymous at its core, the Tor project created a network architecture which anonymizes both the source of the traffic and the identity of users.

Its usage has been highlighted over the years, such as when, in June 2013, Edward Snowden used it to send information on PRISM to the Washington Post and The Guardian. This has prompted many government agencies around the World to set their best researchers on cracking it, such as recently with the Russian government offering $111,000. At the core of Tor is its Onion Routing, which uses subscriber computers to route data packets over the Internet, rather than using publicly available routers.

One person's terrorist is another's freedom fighter

There's a well-known saying that "One person's terrorist is another's freedom fighter", and the Tor network falls into this debate, with the media painting it as a place where all the bad people go. I once attended a talk by a security consultant connected to the Dutch police, and he outlined a slide such as the following: there are obvious examples of evil actors (such as Adolf Hitler), while others have gone from being branded terrorists to being seen as freedom fighters (such as Mahatma Gandhi and Martin Luther King). So in the discussion over Tor – aka the Dark Web – there are two sides, and the media has latched onto the negative one.


With the Tor network, the routing is done using the computers of volunteers around the world to route the traffic around the Internet, and with every hop the chance of tracing the original source becomes smaller. In fact, it is rather like a pass-the-parcel game, where players randomly pass the parcel to others, but where the destination receiver will eventually receive it. As no-one has marked the parcel on its route, it's almost impossible to find out the route that the parcel took.
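The pass-the-parcel analogy maps quite neatly onto layered encryption. The following is a toy sketch in Python (using the third-party cryptography package; it is an illustration of the idea, not Tor's actual protocol), where each relay holds one key and can only peel off its own layer:

# Requires the third-party "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit relays

def wrap(message, keys):
    """Encrypt for the exit relay first, then wrap outwards towards the entry relay."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def relay_hop(onion, key):
    """Each relay strips exactly one layer and forwards what remains."""
    return Fernet(key).decrypt(onion)

onion = wrap(b"hello from the hidden client", relay_keys)
for key in relay_keys:        # the parcel passes through each relay in turn
    onion = relay_hop(onion, key)
print(onion)                  # only the final hop recovers the plaintext

No single relay sees both the sender and the plaintext, which is the essence of why the original source is so hard to trace.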

The traces of users accessing Web servers are thus confused with non-traceable accesses. This has caused a range of defence agencies, including the NCA and GCHQ, to invest in methods of compromising the infrastructure, especially to uncover the dark web. A strange feature in the history of Tor is that it was originally sponsored by the U.S. Naval Research Laboratory (which had been involved in onion routing); its first version appeared in 2002, and was presented to the world by Roger Dingledine, Nick Mathewson, and Paul Syverson, who were named, in 2012, among the Top 100 Global Thinkers. It has since received funding from the Electronic Frontier Foundation, and is now developed by The Tor Project, which is a non-profit organisation.

Thus, as with the right to remain private, some fundamental questions remain, and Tor is a target for many governments around the World. In 2011, it was awarded the Free Software Foundation's 2010 Award for Projects of Social Benefit for:

“Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt.”

So what’s so hidden about the Dark Web?

So what is the "Dark Web"? Well, for a computer to be accessible it normally must have a global IP address, which means that it can be accessed by anyone on the Internet (obviously security restrictions stop this in many cases, especially where computers exist within private networks – such as on a home wireless network). Thus the dark web still has computers which are accessible by the whole of the Internet; it is the route to the destination that is hidden. This is similar to using any site with https at the start of the Web address, where the communications are protected along the whole of the path between the user and the Web site – which is actually what Google does whenever you search for a term. It is also the way that organisations identify themselves (such as PayPal in the example below), and make sure that an intruder does not interfere with the communications.
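As a stand-in for the certificate example, here is a minimal Python sketch which performs the TLS handshake and prints who the certificate was issued to and which authority vouches for it – the same identity check that a browser performs silently:

import socket
import ssl

def peer_certificate(host):
    """Perform a TLS handshake and return the server's validated certificate."""
    context = ssl.create_default_context()
    with socket.create_connection((host, 443)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.getpeercert()

cert = peer_certificate("www.paypal.com")
print(cert["subject"])        # who the certificate was issued to
print(cert["issuer"])         # which authority vouches for them
print(cert["notAfter"])       # when the identity claim expires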


Conclusions

So we're not generally training spies or spooks; we're focused on well-grounded professionals who understand how to defend against and respond to threats, and how to investigate in a fair and honest way. In terms of the Dark Web, there is a degree of naivety in the general public about it, and more needs to be done, as people may be accused of crimes on the basis of generalisations.

A typical usage of Tor is to hide the tracks of an operation, and this is a typical defence mechanism for criminal gangs, so the Dark Web actually becomes a Dark Tunnel, where the intruder connects to a computer that is on the Internet and accessible by others.

So … at least the media is interested in the subject, but we must watch for generalisations, as they can give the wrong impression, and within criminal investigations we need to watch that the naivety of the general public does not compromise investigations.

In Cyber Security research, academia is struggling to keep up

Introduction

Cyber security is one of the fastest moving areas, where new vulnerabilities are found and acted upon within hours. For academia, the traditional timescale for picking up on things and moving forward is often measured in the time it takes to get a paper published, disseminated, and then picked up by the community. It is often this agreement within a community that moves an area forward, and it can be measured in the gap between conferences, where one presenter may outline a new method and, a year later, many researchers are talking about it. Thus the timescales involved are often months, if not years.

Fast moving pace

Generally, academic research in Cyber Security, especially outside the US, is struggling to keep up with the latest issues. One of the key reasons for this, apart from the long delays involved, is that there is extensive funding from industry into probing and discovering new vulnerabilities, and into developing new and innovative security solutions. Academia is thus struggling to keep pace with industry and the demand for fast responses.

The race to find new compromises and make the headlines is often key to developing a strong reputation in the industry (while many of the existing players are working under NDAs with their customers, so cannot publish the things they find). With OpenSSL, the Finnish security firm Codenomicon discovered the Heartbleed flaw at around the same time as it was found by Google, and even registered a vulnerability ID and a domain name before releasing the information on it. For them, there was a strong business model in gaining a reputation as leaders in the area. With Heartbleed, the issue was largely over by the time academia could get itself into gear, which is strange for the scientific community, where in many research areas a problem is found, and it can take years for academia to test out new methods of addressing it and present new ideas (with all their associated evaluation).

Another example of a strong business model for Cyber security research relates to CyberVor, the Russia-based Cyber gang who had stolen over 1.2 billion usernames and passwords, and millions of associated email addresses. The company which discovered the operation, Hold Security, has since said it will charge $120 (£71) a month for a "breach notification service". This must make the service one of the most profitable ever created, as the gathering of data across the Internet involves minimal costs; although there is a cost in maintaining the software behind the gathering agents, the toolkits for gathering the information are fairly well developed, so it becomes an integration exercise, with support for clients to add the details that their company is interested in. Hold Security can then sit back, monitor the Internet, and pick off any data which pin-points a client company.

Again, with the Boleto fraud in Brazil, it was a company (RSA) which announced a large-scale fraud of Boleto Bancário (or Boleto, as it is simply known), which could be the largest electronic theft in history ($3.75bn). Overall, with the fraud, there were nearly 200,000 infected IP addresses that had the infection on their machines.

Access to Data

One of the major barriers for academia is access to real data, and the ethics involved in dealing with it. Many companies have direct access to data feeds coming in from other companies, and can aggregate these together to analyse trends and pin-point issues. They can then feed these to their R&D teams to work on new ways to address the problems. In this way companies are generating new IP, which they keep and sell on within their services. For academia this is a new world, where it does not have the privilege of accessing the same data, and it is industry which is making the massive leaps within Cyber Security.

The problems caused by possible ethics issues were highlighted by the recent suspected compromise of the Tor network. Academic researchers identified an underlying flaw in Tor's network design, which led the Tor Project to warn that an attack on the anonymity network could have revealed user identities. This warning was in response to the work of two researchers from Carnegie Mellon University (Alexander Volynkin and Michael McCord), who exploited the infrastructure. At present SEI (Software Engineering Institute) has a Defense Department contract until June 2015, worth over $110 million a year, with a special focus on finding security vulnerabilities.

Overall the attacks ran from January 2014, and were finally detected and stopped on 4 July 2014. In response to the vulnerability being found, the Tor team, in a similar way to the OpenSSL Heartbleed announcement, were informed that the researchers were to give a talk at the Black Hat hacker conference in Las Vegas. The sensitivities around the area are highlighted by the fact that the talk was cancelled, as neither the university nor SEI approved it. The Tor project, through Roger Dingledine's blog entry of 4 July 2014, revealed that identities could have been exposed over the period of the research.

But Academia looks at long term issues…

There is always an argument that academia should look at the longer-term issues. Unfortunately, in the current landscape, it is the ability to look at the new things evolving on a short-term basis that matters, and the longer-term problems often disappear as new cracks appear. I am reminded of one journal that stopped accepting cryptography papers, as these papers, while novel in their approach, had very little to contribute to the existing methods, and there was very little chance of the methods ever being used in real systems. While they were interesting to a closed community, they did little to push forward the barriers of science. Unfortunately, too, academia is often measured on its citation count, and not necessarily on its ability to address the key fundamental areas within cyber security. At present, few people can actually see beyond the current vulnerability, and many are struggling to see the three-to-five-year horizon for research, let alone the ten-year vision. The cards, though, are very much with industry just now, as they have access to real-life systems, and can see the new methods that intruders use to break systems.

And in the UK …

In the UK there have been moves to join up security agencies, such as through GCHQ and the NCA, but perhaps it is the linkages to industry that need to be developed further, as industry often has access to data, and will see lower-level threats. At present, there are some forums for academia and industry to come together and discuss issues, but these need to happen more frequently, so that industry and academia keep each other synchronised on the key focal points, and on how research work might be focused. The issue of sharing data is, though, a major barrier in Cyber security, and it is one that is not easily fixed. Threats to industry and to high-risk businesses such as health care, energy and finance are key areas that require close collaboration between industry and academia, especially in creating new information architectures that can scale, but which are also inherently secure.

As has been seen in the US, the tension between funders and academic research may become an issue, so it is important that academia understands the commitments it makes to funders in Cyber security research, and makes sure that ethics and permissions are granted at the right time, as dissemination and the related impact of work is a key measure of academic excellence. The current model of assessment will struggle to fit, as sensitive work within Cyber security is often closed to publication within peer-reviewed conferences, especially in cases where the core data and methods must be reviewed.

Conclusions

Cyber security is one of the fastest moving industries ever created, and often issues are exposed and acted upon within days or hours, so academia needs to try to keep up with, and work with, industry, as issues can appear and disappear in the blink of an eye, without any input from academia. The strongest bonds that we have just now are our links to our collaborators, and these allow us to keep up-to-date both in terms of our teaching and our research. Academia, in Cyber research, needs to listen more to industry, and learn about the problems that need to be solved. I appreciate that this does happen in academia, but in Cyber Security it is fundamental, and needs to be a continual dialogue.

The Two Sided Goldmine of Computer Security

Introduction

The business model of finding compromises

This week it was revealed that CyberVor, the Russia-based Cyber gang, had stolen over 1.2 billion usernames and passwords, and millions of associated email addresses. The company which discovered the operation, Hold Security, has since said it will charge $120 (£71) a month for a "breach notification service". This must be a fine case of both sides of a balance sheet, where companies thrive on the scare factor of data being released into the wild, as one leaked email address and password can become a jump-off point onto an organisation's network.

In terms of the $120 per month subscription, this must make the service one of the most profitable ever created, as the gathering of data across the Internet involves minimal costs; although there is a cost in maintaining the software behind the gathering agents, the toolkits for gathering the information are fairly well developed, so it becomes an integration exercise, with support for clients to add the details that their company is interested in. Hold Security can then sit back, monitor the Internet, and pick off any data which pin-points a client company … phew, that is a fantastic business model! In fact, companies doing this are actually using the same tools and the same distributed network for scanning and probing as the hackers do.

One area that we have covered in relation to this business model is that the data gathered from security monitoring can actually be used for analysing business performance too. The security monitoring of Web sales on a site, for instance, can be used to determine the dwell time between a user putting an item in the basket and the time that they purchase the goods. More information on using SIEM (Security Information and Event Management) is here.
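As a rough illustration of that dual use, the sketch below (in Python; the event names and log format are assumptions for illustration only) takes a handful of monitoring events and computes the basket-to-purchase dwell time per session:

from datetime import datetime

# The event names and log format here are assumptions for illustration only.
events = [
    {"session": "s1", "event": "basket_add", "time": "2014-08-24T11:02:10"},
    {"session": "s1", "event": "purchase",   "time": "2014-08-24T11:09:45"},
    {"session": "s2", "event": "basket_add", "time": "2014-08-24T11:05:00"},
    {"session": "s2", "event": "purchase",   "time": "2014-08-24T11:31:12"},
]

def dwell_times(log):
    """Seconds between adding to the basket and purchasing, per session."""
    started, dwell = {}, {}
    for entry in log:
        stamp = datetime.fromisoformat(entry["time"])
        if entry["event"] == "basket_add":
            started[entry["session"]] = stamp
        elif entry["event"] == "purchase" and entry["session"] in started:
            dwell[entry["session"]] = (stamp - started[entry["session"]]).total_seconds()
    return dwell

print(dwell_times(events))    # {'s1': 455.0, 's2': 1572.0}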

The race to find new compromises and make the headlines is often key to developing a strong reputation in the industry (while many of the existing players are working under NDAs with their customers, so cannot publish the things they find). With OpenSSL, the Finnish security firm Codenomicon discovered the Heartbleed flaw at around the same time as it was found by Google, and even registered a vulnerability ID and a domain name before releasing the information on it. For them, there was a strong business model in gaining a reputation as leaders in the area. The complete timeline of Heartbleed is available here.

The flip-side

The flip side is that there are so many vulnerabilities out there that it is almost trivial for intruders to go out and get information from companies, and to gain footholds on their networks. We have now created organisations which are built on data; they have somewhat forgotten that this has become their key asset, and that some people now want, and have, access to it. As long as there's one human involved, there's a chance for leakage to happen.

Hackers now have a whole range of tools in their toolbox, where they can command a whole army of proxy agents – each known as a bot, and controlled remotely as part of a botnet – which can do the vulnerability probing and data stealing on their behalf. Anyone listening on the network will not be able to find the original source of the probing, as it is done by one of the compromised agents. The creation of a botnet agent is often fairly simple for the hackers, as it normally involves sending a phishing email – such as one with a link to a fake HMRC on-line Web page – which compromises the system through unpatched software. Common compromises include Adobe Reader, Adobe Flash and Oracle Java, where a backdoor agent is downloaded onto the compromised host, and then listens for events, such as the user logging into bank systems. The agents can also be used to send requests to remote sites, such as probing for usernames and passwords, or for DDoS (Distributed Denial of Service) attacks.

Possibilities for fraud

The profitable side for hackers was shown in July 2014, when RSA announced a large-scale fraud of Boleto Bancário (or Boleto, as it is simply known), which could be the largest electronic theft in history ($3.75bn). Overall, with the fraud, there were nearly 200,000 infected IP addresses that had the infection on their machines. A boleto is similar to an invoice issued by a bank so that a customer ("sacado") can pay an exact amount to a merchant ("cedente"). These can be generated in an off-line manner (with a printed copy) or on-line (such as in on-line transactions).

Boleto is one of Brazil's most popular payment methods, and just last week it was discovered to have been targeted, for over two years, by malware. There are no firm figures on the extent of the compromise, but up to 495,753 Boleto transactions were affected, with a possible hit of $3.75bn (£2.18bn).

Boleto is the second most popular payment method in Brazil, after credit cards, and accounts for around 18% of all purchases. It can be used to pay a merchant an exact amount, typically for phone or shopping bills. There are many reasons that Boleto is popular in Brazil, including the fact that many Brazilian citizens do not have a credit card, and even when they do have one, it is often not trusted. Along with this, the transaction typically has a fixed cost of 2 to 4 US dollars, as opposed to credit card rates, which are a percentage of the transaction (in Brazil, typically between 4 and 7.5%).
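As a quick worked comparison of the two fee models (using the figures quoted above, with 5.5% assumed as a mid-point card rate), the sketch below shows why the flat Boleto fee becomes attractive as the purchase size grows:

def boleto_fee(amount, flat=2.00):      # flat fee, quoted at 2-4 US dollars
    return flat

def card_fee(amount, rate=0.055):       # 4-7.5% quoted; 5.5% assumed as a mid-point
    return amount * rate

for purchase in (20, 100, 500):
    print(purchase, round(boleto_fee(purchase), 2), round(card_fee(purchase), 2))
# For a $20 purchase the card fee (1.10) undercuts the flat fee (2.00),
# but at $100 (5.50) and $500 (27.50) the flat Boleto fee is far cheaper.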

The operation infected PCs using standard spear-phishing methods, managed to infect nearly 200,000 PCs, and stole over 83,000 sets of user email credentials. It used a man-in-the-browser attack, where the malware sits in the browser – including Google's Chrome, Mozilla's Firefox and Microsoft's Internet Explorer – and intercepts Boleto transactions. The reason that the impact was so great is that Boleto is only used in Brazil, so malware detection software had not targeted it, as it is a limited market.

The Web-based control panel for the operation shows that fraudsters had stolen $250,000 from 383 hijacked boleto transactions from February 2014 until the end of June 2014 (Figure 2). Of the statistics received, all the infected machines were running Microsoft Windows, with the majority running Microsoft Windows 7 (78.3%) and Microsoft Windows XP being the second most popular (17.2%). Of the browsers detected, the most popular was Internet Explorer (48.7%), followed by Chrome (34%) and Firefox (17.3%), and the most popular email domain used to steal user credentials was hotmail.com (94%).

Bad code on Web sites

The reason that this gathering of information by the CyberVor gang worked so well points to three things: bad coding practice on Web sites (where user input is not checked, and goes straight through to the database), the usage of proxy agents (bots) to both probe and gather the usernames and passwords, and phishing emails (which are used to compromise a host so that it becomes a bot). These three things make it easy for intruders to target the collection of data, and then press the button and wait. They then have a whole army of data harvesters, which are basically infected computers around the Internet. As long as there's an unpatched system somewhere on the Internet, there's the potential for a bot to work on behalf of someone. As said previously in this blog, the three main targets are Adobe Reader, Adobe Flash and Oracle Java [Blog].

Unfortunately all three of these problems point to two things: humans producing bad code (not checking their inputs) and humans being careless (not patching their systems). So, as we will see, the root of many of the current vulnerabilities is XSS (Cross-site scripting) and SQL injection, which are caused by coders not understanding that there might be people who want to compromise their system. Developers are often under pressure to roll out their code, and only test it for valid inputs; along with this, they will often disable security controls when operational issues occur, and then forget to put them back in place. A current target, though, is the copy-and-pasting of PHP code, which is often left unmodified and without any security checking. Often the figure of 100:1 is used for the ratio of hours/effort spent developing the system (dev ops) against the hours spent operating and evaluating the system (sys ops). It is obvious that this needs to change, and in this post we'll see some of the pointers towards it.

The Heartbleed vulnerability focused on a human coding error within the OpenSSL encryption library, and highlighted that it is humans who often cause the most serious security vulnerabilities. Often these vulnerabilities can be traced to poor software development methods or practices. For example, the Adobe hack, which exposed nearly 150 million passwords, had several pointers to poor security practice, including the fact that users could select extremely weak passwords, which could be easily cracked. Another standard process which Adobe missed was to add salt to the encrypted version of the password. Salting makes it much more difficult to crack passwords in a database, even when weak passwords have been used. In the Adobe hack, nearly 2 million users selected "123456" as their password, and once one password is cracked, every other account with the same password is also cracked (note: a salted system does not reveal other accounts which have the same source password).
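As a small sketch of why salting matters (Python, illustrative only), note how two users who both pick "123456" end up with completely different stored values, so cracking one record tells the attacker nothing about the other:

import hashlib
import hmac
import os

def store_password(password):
    """Hash a password with a fresh random salt (PBKDF2-HMAC-SHA256)."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify(password, salt, digest):
    """Re-derive the hash with the stored salt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)

salt_a, hash_a = store_password("123456")
salt_b, hash_b = store_password("123456")
print(hash_a == hash_b)                  # False: same password, different stored values
print(verify("123456", salt_a, hash_a))  # True: the correct password still verifies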

Two of the easiest methods for an intruder to steal data, and which will often result in success, are XSS (Cross-site scripting) and SQL injection. With XSS, the intruder forces some script into the page to make it act incorrectly, and with SQL injection the page sends an SQL command through to the database, which can reveal its content. If developers do not check their code, or do not undertake a penetration test, the Web site can be at risk.
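As a minimal sketch of the SQL injection problem and its standard fix (Python with an in-memory SQLite database, purely for illustration), note how the unparameterised query hands back the whole table, while the parameterised one treats the same input as harmless data:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

user_input = "' OR '1'='1"               # a classic injection payload

# Vulnerable: the input is pasted straight into the SQL string, so the WHERE
# clause always evaluates to true and every row in the table is returned.
unsafe = "SELECT * FROM users WHERE name = '" + user_input + "'"
print(conn.execute(unsafe).fetchall())   # [('alice', 'secret')]

# Safe: a parameterised query treats the input as data, never as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # []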

Conclusions

Computer Security is becoming the new darling industry, especially as data becomes one of the key assets for organisations. There's value in the data, and if there's value, there will be people out there trying to get it, and creating their own business model around using it. Like it or not, we are all being monitored on the Internet, as companies want to find out everything about us. Just look at your Web browsing, where something you showed a bit of interest in on Amazon starts to propagate itself into other Web sites that you access. Every time a company monitors you, your data is stored somewhere, and the cookie on your machine is only a touch point for this tracking.

Intruders can then build business models at scale, and use agents, running at fractions of a dollar in cost, in the Cloud to act as their paid data gatherers, creating profit models that could never have been conceived before the Information Age. At one time gangs would employ spies to go and photograph or steal documents; now it's a little agent running in the Cloud, which comes alive when an innocent person's machine has been infected, or an instance spun up in the Cloud for a short time and then gone. The trails back to the controller are almost impossible to find, especially if they have used the Tor network to access the harvested data. As the potential for this increases by the day, and access to tools becomes so easy, a whole lot of companies, small and large, are building excellent business models. The one benefit for companies, though, is that the investment in security monitoring infrastructure can actually pay back many times over, as it becomes the key analytic engine for the company, providing real-time information on its operation.