By 2035, all of Earth’s energy resources could be consumed just keeping our computers running, according to a projection from the Semiconductor Research Corporation. This is a problem that looms large, and it can only be solved by shifting to more scalable systems that curb computing’s appetite for power.
The faulty system at the heart of the problem is the microprocessor. Once upon a time, the microprocessor seemed like a miracle, increasing in speed year after year. In reality, Moore’s Law, the observation that the number of transistors in an integrated circuit doubles roughly every two years, was only one half of the miracle. More transistors meant smaller transistors, which allowed for a continual boost in clock speed.
The other half of the miracle was a concept called Dennard scaling. One would assume that adding more transistors would require more power to run all of them, but this is not the case. As transistors shrink, the voltage and current they need shrink with them, so a chip’s power per unit area stays constant no matter how many transistors you pack in. So what’s the problem?
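The two halves of the miracle can be sketched with a little arithmetic. The scaling factor and the `dennard_generation` helper below are my own illustration, not figures from the article:

```python
# A sketch of classic Dennard scaling: when a transistor's linear dimension
# shrinks by a factor k, voltage and current each shrink by roughly 1/k, so
# power per transistor falls by 1/k^2. Density rises by k^2, so the power
# drawn per unit of chip area stays flat.

def dennard_generation(power_per_transistor, density, k=1.4):
    """One process generation: ~2x density, ~1/2 power per transistor."""
    power_per_transistor /= k ** 2   # V and I each scale by 1/k
    density *= k ** 2                # the same area holds k^2 more transistors
    return power_per_transistor, density

power, density = 1.0, 1.0  # normalized starting point
for gen in range(5):
    power, density = dennard_generation(power, density)
    print(f"gen {gen + 1}: power density = {power * density:.2f}")  # stays 1.00
```

Per-transistor power keeps dropping while density keeps rising, and the product, power density, never moves. That is why chipmakers could double transistor counts for decades without the chip running any hotter.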
The problem is physics. Moore’s Law is slowing down, and Dennard scaling has broken down: at very small scales, leakage current keeps transistors drawing power even when they sit idle, so the wealth of transistors now guzzles energy.
I remember a professor of mine lambasting Intel for its block-headed approach to chip making, saying something to the effect of, “They just cram a bunch of transistors into their chip and call it i7.” Around the time my professor made that statement, Intel announced that it would slow the cadence of its chip launches.
Intel probably stumbled upon the second law of thermodynamics, which governs the flow of heat, and Newton’s law of cooling, which describes how quickly an object sheds heat based on factors like its shape and heat capacity.
Stuffing a chip with billions of transistors generates more heat than the silicon can handle. Blazing clock speeds also require more power, which creates even more heat. So clock speed must be sacrificed, or ever more robust cooling systems built, to combat the heat. This tug-of-war becomes increasingly inefficient as transistor sizes shrink along Moore’s Law curve. In the end, we can only spend so much power getting those transistors to chill out.
So, What’s The Solution?
For Robert Wolkow, an atomic physicist at the University of Alberta, the answer is simple: go green. The key principles of green technology are sustainability, viability, and innovation. Wolkow and his research associates plan to boost processing speeds up to 100 times while cutting energy waste by a factor of 100. Science Daily quoted Wolkow as saying, “Today’s electronics have reached a point of maturation and can’t be made any better. We have to stop using so much electricity to run our computers, and that means we need a drastic change in the kind of computers we use.”
The “drastic change” comes in the form of atom-scale devices, otherwise known as nanotechnology. These would replace conventional silicon transistors with much smaller, much faster tech. IBM expects to have “nano” chips out after 2020. Other electronics manufacturers like IMEC, Cadence, Intel, and Samsung have recently announced breakthroughs in their transistor designs, shrinking their transistors to 5 nanometers.
Though silicon still powers most of our electronics, nanotechnology is already being integrated into silicon transistors. In 2017, IBM announced that it had used stacked silicon nanosheets to build a 5-nanometer transistor. Wolkow also spoke of a marriage of “new atomic-scale technology with the standard CMOS technology that powers today’s electronics, providing an easy entryway to market.”
What any developer can take away from this is that as the tech around us moves on to greener pastures, we too must evolve our methods and become more efficient in our software development. Faster hardware makes it possible to build more complex tools, but the burden still lies with the developer to write efficient code that properly leverages these smaller, faster, greener computers.
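As a toy illustration of that burden (my example, not one from the article), a single algorithmic choice can save far more energy than any process shrink. Here, swapping a linear scan over a list for a hash lookup in a set does the same work in a fraction of the time:

```python
# Membership testing: a list is scanned element by element (O(n) per lookup),
# while a set hashes straight to the answer (O(1) on average).
import time

items = list(range(20_000))
as_set = set(items)

start = time.perf_counter()
hits = sum(1 for x in range(0, 20_000, 7) if x in items)     # linear scans
list_time = time.perf_counter() - start

start = time.perf_counter()
hits_set = sum(1 for x in range(0, 20_000, 7) if x in as_set)  # hash lookups
set_time = time.perf_counter() - start

assert hits == hits_set  # identical results, very different costs
print(f"list: {list_time:.4f}s  set: {set_time:.4f}s")
```

The greener the hardware gets, the more wasteful it looks when software throws those gains away on avoidable work.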