Posted on March 22, 2013
The word ‘computer’ originally meant someone or something that computes. Today, a computer is a general-purpose device that has developed enormously over the years. Could you live without a computer or any technology that uses the internet? Since the 1940s, there have been four generations of computers, each with its own pros and cons. So how did the first computer come about, how has it changed over the years, and how has it changed society?
The 1st Generation
The 1st generation of computers ran from 1940 to 1955. These computers were powered by vacuum tubes and used magnetic drums for memory and data storage.
The vacuum tube controlled the flow of electric current through a sealed container, usually a thin, roughly cylindrical glass envelope. The magnetic drum was a metal cylinder coated with magnetic iron-oxide material, on which data and programs were stored.
The ENIAC (Electronic Numerical Integrator and Computer) was created by John Mauchly and John Presper Eckert. The ENIAC contained 17,468 vacuum tubes, 70,000 resistors, 10,000 capacitors, 1,500 relays, 6,000 manual switches and 5 million soldered joints. It covered 1800 square feet of floor space and weighed 30 tons. In one second, the ENIAC could work out 5,000 additions, 357 multiplications or 38 divisions. Using vacuum tubes instead of mechanical switches and relays increased the speed of calculation, but it took technicians weeks to re-program the machine, and it required many hours of maintenance – 24 hours a day!
Like many things when they start out, the first generation of computers had plenty of problems. They were very expensive to make and run, slow, and limited in what they could be used for. They also generated a lot of heat and used huge amounts of electricity; the ENIAC alone drew 160 kilowatts of electrical power!
The 2nd Generation
Then in 1956, the second generation of computers arrived. The vacuum tube was no longer the best way to run a computer; the component that replaced it was the transistor. The transistor had a major influence on the development of electronics, as it was far more reliable than the vacuum tube. It made computers smaller, faster, cheaper and more efficient to run. The first symbolic languages appeared and the foundations of high-level programming languages were laid; the first of these were COBOL and FORTRAN. However, second-generation computers still generated a lot of heat, which could damage their sensitive internal parts.
The 3rd Generation
The era of the transistor was short-lived, because in 1964 the integrated circuit became the next big step in semiconductor technology. Integrated circuits allowed miniaturised transistors to be placed on silicon chips. The better technology improved speed and efficiency, made machines smaller and cheaper to run, and made computers accessible and affordable to a mass audience. Punched cards and printouts were replaced by keyboards and monitors, and the first operating systems were created to allow multiple applications to run on one computer.
You can probably see the modern computer starting to take shape as the evolution goes on. Computers were getting faster, cheaper to run and more affordable for the average person to buy.
The 4th Generation
The 4th generation is still the present generation of computers. When the microprocessor was invented in 1971, the era of mass computer usage began! The 4th generation was started by the Intel 4004 chip. A microprocessor packs thousands of integrated circuits onto one silicon chip, containing the CPU, memory and input/output controls – far more advanced than the previous three generations. Ten years later, IBM introduced the first computer dedicated to the home user, and three years after that, Apple introduced the Macintosh.
The development of computer networks began, and this led to the invention of the Internet. Where the last generation gave us keyboards and monitors, this generation saw the arrival of the mouse, the GUI and other features. The use of microprocessors expanded, and today the majority of electronic devices use microprocessors or micro-controllers.
The Present Day and the Future
It is amazing to see how the computer has developed, from a machine that filled a large room to what we have in the workplace and at home today – a much smaller, much faster descendant of the 1st computer. But how has the development of computers changed the way we work, find information and communicate? Think: when was the last time you found something out from a book or asked a specialist? We all know it is easier and quicker to type a few words into a search engine and get the answer within a second, because we have constant access to powerful search engines. Does anyone write anything down on a bit of paper with a pen anymore? Or do we tap it into our phones and computers?
As time has gone on, computers have had a dramatic impact on the way we think and work. Before the 1st computer was invented, ‘computer’ was a job title for people who performed calculations by hand, with pen and paper; now those job titles have become SEO Specialist, IT Support or Data Analyst.
With computers, we are able to work more quickly and efficiently because of how advanced they are. They can store masses of data, and you can open up any number of applications and internet services to help with your work. Computers have also shrunk in size and become easier to steal – many people have invested in laptop and computer security to lock down their valuable computers and laptops.
Communication has changed because of computers as well. Nine times out of ten, we will email someone or interact with them on Twitter or Facebook instead of talking to them over the phone or simply meeting face-to-face. For business meetings, you can now ‘facetime’ other companies instead of travelling somewhere to have a conference.
Have computers changed the world too much? Or are we comfortable with what we have, and do we need the computer to advance to another generation?
The Workplace Depot sells laptop security equipment and cable protectors.
What’s Next in Computing?
The computing industry progresses in two mostly independent cycles: financial and product cycles. There has been a lot of handwringing lately about where we are in the financial cycle. Financial markets get a lot of attention. They tend to fluctuate unpredictably and sometimes wildly. The product cycle by comparison gets relatively little attention, even though it is what actually drives the computing industry forward. We can try to understand and predict the product cycle by studying the past and extrapolating into the future.
Tech product cycles are mutually reinforcing interactions between platforms and applications. New platforms enable new applications, which in turn make the new platforms more valuable, creating a positive feedback loop. Smaller, offshoot tech cycles happen all the time, but every once in a while — historically, about every 10 to 15 years — major new cycles begin that completely reshape the computing landscape.
The PC enabled entrepreneurs to create word processors, spreadsheets, and many other desktop applications. The internet enabled search engines, e-commerce, e-mail and messaging, social networking, SaaS business applications, and many other services. Smartphones enabled mobile messaging, mobile social networking, and on-demand services like ride sharing. Today, we are in the middle of the mobile era. It is likely that many more mobile innovations are still to come.
Each product era can be divided into two phases: 1) the gestation phase, when the new platform is first introduced but is expensive, incomplete, and/or difficult to use, and 2) the growth phase, when a new product comes along that solves those problems, kicking off a period of exponential growth.
The Apple II was released in 1977 (and the Altair in 1975), but it was the release of the IBM PC in 1981 that kicked off the PC growth phase.
The internet’s gestation phase took place in the 80s and early 90s when it was mostly a text-based tool used by academia and government. The release of the Mosaic web browser in 1993 started the growth phase, which has continued ever since.
There were feature phones in the 90s and early smartphones like the Sidekick and Blackberry in the early 2000s, but the smartphone growth phase really started in 2007–8 with the release of the iPhone and then Android. Smartphone adoption has since exploded: about 2B people have smartphones today. By 2020, 80% of the global population will have one.
If the 10–15 year pattern repeats itself, the next computing era should enter its growth phase in the next few years. In that scenario, we should already be in the gestation phase. There are a number of important trends in both hardware and software that give us a glimpse into what the next era of computing might be. Here I talk about those trends and then make some suggestions about what the future might look like.
Hardware: small, cheap, and ubiquitous
In the mainframe era, only large organizations could afford a computer. Minicomputers were affordable for smaller organizations, PCs for homes and offices, and smartphones for individuals.
We are now entering an era in which processors and sensors are getting so small and cheap that there will be many more computers than there are people.
There are two reasons for this. One is the steady progress of the semiconductor industry over the past 50 years (Moore’s law). The second is what Chris Anderson calls “the peace dividend of the smartphone war”: the runaway success of smartphones led to massive investments in processors and sensors. If you disassemble a modern drone, VR headset, or IoT device, you’ll find mostly smartphone components.
In the modern semiconductor era, the focus has shifted from standalone CPUs to bundles of specialized chips known as systems-on-a-chip.
Typical systems-on-a-chip bundle energy-efficient ARM CPUs plus specialized chips for graphics processing, communications, power management, video processing, and more.
This new architecture has dropped the price of basic computing systems from about $100 to about $10. The Raspberry Pi Zero is a 1 GHz Linux computer that you can buy for $5. For a similar price you can buy a wifi-enabled microcontroller that runs a version of Python. Soon these chips will cost less than a dollar. It will be cost-effective to embed a computer in almost anything.
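As a concrete illustration of what embedding a computer in “almost anything” can look like, here is a minimal sketch in MicroPython (the “version of Python” mentioned above) for a hypothetical wifi-enabled microcontroller: it joins a network and periodically reports a sensor reading. The board, pin, and credentials are placeholders, not a specific product.

```python
# Minimal MicroPython sketch for a hypothetical ESP8266-class wifi microcontroller.
# Network credentials and the sensor pin are placeholders.
import network
import machine
import time

wlan = network.WLAN(network.STA_IF)        # station (client) wifi interface
wlan.active(True)
wlan.connect("MY_SSID", "MY_PASSWORD")     # placeholder credentials

while not wlan.isconnected():              # wait until the wifi link comes up
    time.sleep(1)

sensor = machine.ADC(0)                    # analogue sensor wired to pin A0 (assumed)

while True:
    print("reading:", sensor.read())       # raw 0-1023 value; a real node would send it over the network
    time.sleep(5)
```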
Meanwhile, there are still impressive performance improvements happening in high-end processors. Of particular importance are GPUs (graphics processors), the best of which are made by Nvidia. GPUs are useful not only for traditional graphics processing, but also for machine learning algorithms and virtual/augmented reality devices. Nvidia’s roadmap promises significant performance improvements in the coming years.
A wildcard technology is quantum computing, which today exists mostly in laboratories but if made commercially viable could lead to orders-of-magnitude performance improvements for certain classes of algorithms in fields like biology and artificial intelligence.
Software: the golden age of AI
There are many exciting things happening in software today. Distributed systems are one good example. As the number of devices has grown exponentially, it has become increasingly important to 1) parallelize tasks across multiple machines and 2) communicate and coordinate among devices. Interesting distributed systems technologies include systems like Hadoop and Spark for parallelizing big data problems, and Bitcoin/blockchain for securing data and assets.
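To make “parallelizing big data problems” concrete, here is a minimal PySpark sketch, assuming Spark is installed and pointed at some hypothetical text files. The same few lines run unchanged on a laptop or across a cluster: Spark splits the input into partitions, spreads the per-line work over many workers, and brings only the small aggregated result back.

```python
# Minimal PySpark sketch: count word frequencies in parallel.
# The input path is a placeholder; "local[*]" runs the same code on one machine.
from pyspark import SparkContext

sc = SparkContext("local[*]", "WordCount")

lines = sc.textFile("data/*.txt")                      # hypothetical input files
counts = (lines
          .flatMap(lambda line: line.split())          # one record per word
          .map(lambda word: (word, 1))                 # pair each word with a count of 1
          .reduceByKey(lambda a, b: a + b))            # sum counts per word, in parallel

print(counts.takeOrdered(10, key=lambda kv: -kv[1]))   # ten most frequent words
sc.stop()
```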
But perhaps the most exciting software breakthroughs are happening in artificial intelligence (AI). AI has a long history of hype and disappointment. Alan Turing himself predicted that machines would be able to successfully imitate humans by the year 2000. However, there are good reasons to think that AI might now finally be entering a golden age.
“Machine learning is a core, transformative way by which we’re rethinking everything we’re doing.” — Google CEO, Sundar Pichai
A lot of the excitement in AI has focused on deep learning, a machine learning technique that was popularized by a now famous 2012 Google project that used a giant cluster of computers to learn to identify cats in YouTube videos. Deep learning is a descendant of neural networks, a technology that dates back to the 1940s. It was brought back to life by a combination of factors, including new algorithms, cheap parallel computation, and the widespread availability of large data sets.
It’s tempting to dismiss deep learning as another Silicon Valley buzzword. The excitement, however, is supported by impressive theoretical and real-world results. For example, the error rates for the winners of the ImageNet challenge — a popular machine vision contest — were in the 20–30% range prior to the use of deep learning. Using deep learning, the accuracy of the winning algorithms has steadily improved, and in 2015 surpassed human performance.
Many of the papers, datasets, and software tools related to deep learning have been open sourced. This has had a democratizing effect, allowing individuals and small organizations to build powerful applications. WhatsApp was able to build a global messaging system that served 900M users with just 50 engineers, compared to the thousands of engineers that were needed for prior generations of messaging systems. This “WhatsApp effect” is now happening in AI. Software tools like Theano and TensorFlow, combined with cloud data centers for training, and inexpensive GPUs for deployment, allow small teams of engineers to build state-of-the-art AI systems.
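To give a sense of how accessible these tools have become, here is a minimal sketch, assuming TensorFlow with its bundled Keras API and the small built-in MNIST digit dataset: a dozen lines are enough to train and evaluate a simple neural network on a laptop. It is a toy rather than a state-of-the-art system, but the same tools scale up to the kinds of results described above.

```python
# Minimal TensorFlow/Keras sketch: train a small neural network to read
# handwritten digits (MNIST). A toy example, not a production system.
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0        # scale pixel values to [0, 1]

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),        # 28x28 image -> 784-element vector
    tf.keras.layers.Dense(128, activation="relu"),        # one hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),      # probabilities for digits 0-9
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=3)                     # a few minutes on a laptop CPU
print(model.evaluate(x_test, y_test))                     # [loss, accuracy] on held-out data
```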
For example, a solo programmer working on a side project used TensorFlow to colorize black-and-white photos, and a small startup built a real-time object classifier that is, of course, reminiscent of a famous scene from a sci-fi movie.
One of the first applications of deep learning released by a big tech company is the search function in Google Photos, which is shockingly smart.
We’ll soon see significant upgrades to the intelligence of all sorts of products, including: voice assistants, search engines, chat bots, 3D scanners, language translators, automobiles, drones, medical imaging systems, and much more.
“The business plans of the next 10,000 startups are easy to forecast: Take X and add AI. This is a big deal, and now it’s here.” — Kevin Kelly
Startups building AI products will need to stay laser-focused on specific applications to compete against the big tech companies that have made AI a top priority. AI systems get better as more data is collected, which means it’s possible to create a virtuous flywheel of data network effects (more users → more data → better products → more users). The mapping startup Waze used data network effects to produce better maps than its vastly better-capitalized competitors. Successful AI startups will follow a similar strategy.
Software + hardware: the new computers
There are a variety of new computing platforms currently in the gestation phase that will soon get much better — and possibly enter the growth phase — as they incorporate recent advances in hardware and software. Although they are designed and packaged very differently, they share a common theme: they give us new and augmented abilities by embedding a smart virtualization layer on top of the world. Here is a brief overview of some of the new platforms:
Cars. Big tech companies like Google, Apple, Uber, and Tesla are investing significant resources in autonomous cars. Semi-autonomous cars like the Tesla Model S are already publicly available and will improve quickly. Full autonomy will take longer but is probably not more than 5 years away. There already exist fully autonomous cars that are almost as good as human drivers. However, for cultural and regulatory reasons, fully autonomous cars will likely need to be significantly better than human drivers before they are widely permitted.
Expect to see a lot more investment in autonomous cars. In addition to the big tech companies, the big automakers are starting to take autonomy very seriously. You’ll even see some interesting products made by startups: deep learning software tools have gotten so good that a solo programmer was able to build a semi-autonomous car.
Drones. Today’s consumer drones contain modern hardware (mostly smartphone components plus mechanical parts), but relatively simple software. In the near future, we’ll see drones that incorporate advanced computer vision and other AI to make them safer, easier to pilot, and more useful. Recreational videography will continue to be popular, but there will also be important commercial use cases. There are tens of millions of dangerous jobs that involve climbing buildings, towers, and other structures that can be performed much more safely and effectively using drones.
Internet of Things. The obvious use cases for IoT devices are energy savings, security, and convenience. Nest and Dropcam are popular examples of the first two categories. One of the most interesting products in the convenience category is Amazon’s Echo.
Most people think Echo is a gimmick until they try it and then they are surprised at how useful it is. It’s a great demo of how effective always-on voice can be as a user interface. It will be a while before we have bots with generalized intelligence that can carry on full conversations. But, as Echo shows, voice can succeed today in constrained contexts. Language understanding should improve quickly as recent breakthroughs in deep learning make their way into production devices.
IoT will also be adopted in business contexts. For example, devices with sensors and network connections are extremely useful for monitoring industrial equipment.
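As a toy illustration of that kind of monitoring, the sketch below (simulated readings and invented threshold values, not a real deployment) keeps a rolling baseline of recent sensor values and flags readings that drift far from it, the sort of check a networked vibration sensor on a pump or motor might run.

```python
# Toy industrial-monitoring sketch: flag sensor readings that deviate sharply
# from a rolling baseline. Readings are simulated; a real system would stream
# them from networked sensors on the equipment.
from collections import deque
import random

WINDOW = 50          # number of recent readings used as the baseline
THRESHOLD = 3.0      # multiples of typical deviation that count as anomalous (assumed)

recent = deque(maxlen=WINDOW)

def check(reading):
    """Return True if the reading looks anomalous relative to recent history."""
    if len(recent) == WINDOW:
        mean = sum(recent) / WINDOW
        spread = (sum((r - mean) ** 2 for r in recent) / WINDOW) ** 0.5 or 1e-9
        if abs(reading - mean) > THRESHOLD * spread:
            return True                   # do not add the outlier to the baseline
    recent.append(reading)
    return False

for step in range(500):
    value = random.gauss(1.0, 0.05) + (2.0 if step == 400 else 0.0)  # injected fault
    if check(value):
        print(f"step {step}: anomalous vibration reading {value:.2f}")
```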
Wearables. Today’s wearable computers are constrained along multiple dimensions, including battery, communications, and processing. The ones that have succeeded have focused on narrow applications like fitness monitoring. As hardware components continue to improve, wearables will support rich applications the way smartphones do, unlocking a wide range of new applications. As with IoT, voice will probably be the main user interface.
Virtual Reality. 2016 is an exciting year for VR: the launch of the Oculus Rift and HTC/Valve Vive (and, possibly, the Sony PlayStation VR), means that comfortable and immersive VR systems will finally be publicly available. VR systems need to be really good to avoid the “uncanny valley” trap. Proper VR requires special screens (high resolution, high refresh rate, low persistence), powerful graphics cards, and the ability to track the precise position of the user (previously released VR systems could only track the rotation of the user’s head). This year, the public will for the first time get to experience what is known as “presence” — when your senses are sufficiently tricked that you feel fully transported into the virtual world.
VR headsets will continue to improve and get more affordable. Major areas of research will include: 1) new tools for creating rendered and/or filmed VR content, 2) machine vision for tracking and scanning directly from phones and headsets, and 3) distributed back-end systems for hosting large virtual environments.
Augmented Reality. AR will likely arrive after VR because AR requires most of what VR requires plus additional new technologies. For example, AR requires advanced, low-latency machine vision in order to convincingly combine real and virtual objects in the same interactive scene.
That said, AR is probably coming sooner than you think. Magic Leap, for example, has released a demo video shot directly through its AR device.
It is possible that the pattern of 10–15 year computing cycles has ended and mobile is the final era. It is also possible the next era won’t arrive for a while, or that only a subset of the new computing categories discussed above will end up being important.
I tend to think we are on the cusp of not one but multiple new eras. The “peace dividend of the smartphone war” created a Cambrian explosion of new devices, and developments in software, especially AI, will make those devices smart and useful. Many of the futuristic technologies discussed above exist today, and will be broadly accessible in the near future.
Observers have noted that many of these new devices are in their “awkward adolescence.” That is because they are in their gestation phase. Like PCs in the 70s, the internet in the 80s, and smartphones in the early 2000s, we are seeing pieces of a future that isn’t quite here. But the future is coming: markets go up and down, and excitement ebbs and flows, but computing technology marches steadily forward.