Are you forward thinking enough?

"640kb ought to be enough for anybody." – Bill Gates, 1981

That quote, which Bill Gates maintains is apocryphal, has nonetheless become cautionary shorthand for arbitrarily limiting a design under the false assumption that no one will ever need to exceed its specifications.  I suspect this quote is at least partially responsible for the decision to make the IPv6 address space as large as it is.  Back in the day, since the number seemed so impossibly large, the designers of IPv4 could almost certainly have been overheard saying:

"4,294,967,296 IPs ought to be enough for anybody."

Fast forward to the 21st century, and it turns out they were spectacularly wrong: that resource is now almost completely exhausted.  "So much for forward thinking," their successors almost certainly thought.  So when IPv6 came along, I swear they just dared people to quote them.  I mean, just look at what this theoretical quote would sound like!

"340,282,366,920,938,463,463,374,607,431,768,211,456 ought to be enough for anybody."

Alright, so what is my point to all of this?  I recently ran into a situation at work that required me to better understand how Windows memory management works.  Through that research, I ended up working with a Sysinternals tool called RamMap, which gives tremendous visibility into exactly how your RAM is carved up.  But of course, to make use of the numbers, you need to understand them.  That led me to a video on Channel9 in the "Debug Tools" series, in which they give a lecture on how to use RamMap.  If you're at all interested in Windows troubleshooting, it's a fascinating video and I recommend it.  You can find it here.

That in turn led me to another Debug Tools video on VMmap, which is yet another wonderful tool from Sysinternals, this one focused on understanding virtual memory.  During the video, the presenter made an offhand comment about why VMmap reported that the total available virtual memory pool was "only" 8TB.  He explained that this was because the CPU designers (i.e. Intel and AMD) decided that even though they are building a "64-bit" CPU, they aren't actually going to wire all 64 bits into the address pipeline, in order to make designing the chips easier.
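To put those numbers side by side, here is the rough arithmetic as I understand it (a quick Python sketch; the 8TB figure is simply the one quoted in the video):

    # Rough arithmetic relating address-bit width to addressable bytes.
    TB = 2 ** 40   # one terabyte (binary)
    EB = 2 ** 60   # one exabyte (binary)

    print(2 ** 43 // TB, "TB")   # 8 TB   -- the pool VMmap reported
    print(2 ** 48 // TB, "TB")   # 256 TB -- a full 48-bit virtual address space
    print(2 ** 64 // EB, "EB")   # 16 EB  -- what true 64-bit addressing would allow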

That caught my attention, so I did some additional Googling and found what I consider a fascinating paragraph on a blog (http://www.alex-ionescu.com/?p=50) that reads:

"The era of 64-bit computing is finally upon the consumer market, and what was once a rare hardware architecture has become the latest commodity in today’s processors. 64-bit processors promise not only a larger amount of registers and internal optimizations, but, perhaps most importantly, access to a full 64-bit address space, increasing the maximum number of addressable memory from 32-bits to 64-bits, or from 4GB to 16EB (Exabytes, about 17 billion GBs). Although previous solutions such as PAE enlarged the physically addressable limit to 36-bits, they were architectural “patches” and not real solutions for increasing the memory capabilities of hungry workloads or applications. 

Although 16EB is a copious amount of memory, today’s computers, as well as tomorrow’s foreseeable machines (at least in the consumer market) are not yet close to requiring support for that much memory. For these reasons, as well as to simplify current chip architecture, the AMD64 specification (which Intel used for its own implementation of x64 processors, but not Itanium) currently only supports 48 bits of virtual address space — requiring all other 16 bits to be set to the same value as the “valid” or “implemented” bits, resulting in canonical addresses: the bottom half of the address space starts at 0x0000000000000000 with only 12 of those zeroes being part of an actual address (resulting in an end at 0x00007FFFFFFFFFFF), while the top half of the address space starts at 0xFFFF800000000000, ending at 0xFFFFFFFFFFFFFFFF."
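To make that "canonical address" rule concrete, here's a little sketch of my own (not from the quoted post) that checks whether a 64-bit value lives in one of those two halves, i.e. whether bits 63 through 48 simply copy bit 47:

    def is_canonical(addr):
        # Under the 48-bit AMD64 scheme, bits 63:48 must be a copy of bit 47
        # (a sign extension), so the top 17 bits are either all 0s or all 1s.
        top = addr >> 47
        return top == 0 or top == (1 << 17) - 1

    print(is_canonical(0x00007FFFFFFFFFFF))  # True  -- top of the lower half
    print(is_canonical(0xFFFF800000000000))  # True  -- bottom of the upper half
    print(is_canonical(0x0000800000000000))  # False -- the unusable middle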

In other words, the specification for 64-bit processors, as I understand it anyway, requires that they literally carve off and render unusable a giant chunk of the address range, under the assumption that "meh, we have so much now, why would we ever need this?"  My mind immediately jumps to the loopback address in IPv4. All the designers needed was a single IP to accomplish their goal (i.e. the infamous 127.0.0.1), but since they had literally billions of IPs that they believed no one would ever run out of, they decided "meh, screw it" and, just like that, threw away 16+ million addresses (the entire 127.0.0.0/8 block) that no one can ever use.
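If you want to see just how much was thrown away, Python's standard ipaddress module will happily do the math (again, purely illustrative):

    import ipaddress

    # The entire 127.0.0.0/8 block is reserved for loopback, even though
    # 127.0.0.1 is the only address most of us will ever touch.
    loopback = ipaddress.ip_network("127.0.0.0/8")
    print(f"{loopback.num_addresses:,}")                   # 16,777,216
    print(ipaddress.ip_address("127.0.0.1") in loopback)   # True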

Will this be a problem in my lifetime?  At the rate computers advance, quite possibly.  Will we find a way around it and keep advancing the field?  Without a doubt.  It was just funny to imagine engineers at Intel falling into the same mental trap as all of those who came before them.  What is it they say?  "Those who forget history are doomed to repeat it."
