Software Engineering: The Next 50 Years
It is not unreasonable to speculate about what software engineering will look like over the next 50 years. Software engineering is still a young discipline: barely half a century has passed since the term “software engineering” was coined. Although we could claim some measure of success simply by pointing to the software underlying almost every facet of today’s world, that success has been neither consistently repeatable nor teachable. As we come to depend on trillions of lines of code over the next 50 years, it is little comfort that we still have no fundamental scientific understanding of how to create software.
TL;DR: This post speculates on possible directions for software engineering and the challenges the research community must begin tackling now in order to be relevant tomorrow. Read on for brains, massive engineering, and potty-training your programs…
A different kind of engineering
Many of the challenges humanity faces in the coming decades will require software that works at completely different scales and under completely different constraints than today’s software. In the past, we could distinguish between programming-in-the-large and programming-in-the-small when reasoning about the team sizes and types of tools needed to build software. While much software still fits these categories, it is already diverging from them in several ways.
Massively distributed software engineering
To meet the grand challenges of humanity, we will have to learn to massively scale software development in entirely new ways or die trying.
The development of the Large Hadron Collider’s core software system spanned over two decades, with over 50 million lines of code. Given enough time and dedication, we can successfully create massively large software systems.
But we may also be reaching our limit, given our current methods and capabilities. In the United States, the recent software behind the health care insurance marketplace reportedly comprises 500 million lines of code.
In the next 50 years, as governments increasingly turn legal policy and services into source code and public APIs, often created within the timespan of a president’s term, we must be prepared to build massively sized software systems on a regular basis. This will often require the cooperation of many diverse stakeholders.
At our current pace, imagine how these would fare if needed in a few years:
- A government API to calculate taxes on all online purchases for any location?
- A distributed traffic regulation system for a network of self-driving cars and drones?
- A planetary asteroid-deflection system in response to an asteroid mining operation gone horribly wrong?
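To make the first item concrete, here is a hypothetical sketch of such a tax API’s core calculation. The jurisdictions, rates, and flat lookup-table design are all invented for illustration; a real system would face thousands of overlapping local rules.

```python
# Hypothetical tax lookup: the rates and jurisdiction keys below are
# invented for illustration, not real tax policy.
TAX_RATES = {
    ("US", "CA"): 0.0725,  # assumed state-level rate
    ("US", "OR"): 0.0,
    ("DE", None): 0.19,    # assumed country-level rate
}

def sales_tax(amount_cents, country, region=None):
    """Return tax owed in cents, rounded down.

    Falls back to a country-level rate when no regional rate exists,
    and to zero when the jurisdiction is unknown.
    """
    rate = TAX_RATES.get((country, region))
    if rate is None:
        rate = TAX_RATES.get((country, None), 0.0)
    return int(amount_cents * rate)
```

Even this toy version hints at the real difficulty: the hard part is not the arithmetic but keeping the rate table correct for every jurisdiction, every day.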
Billions of disposable apps are created; most are used for only a few days.
As the cost of infrastructure for hosting data and software drops to near zero, many instances of software can be created in a few hours, potentially scale to millions of users, and then be discarded a few days later.
The primary challenge for this type of developer will lie not in creating the software but in managing its ecosystem: rapid iteration, instant distribution, insightful monitoring.
15 minutes of fame
Almost every developer creates an app with 15 million users at some point in their career.
For those seeking a nobler opportunity: there are currently about 18 million developers in a world population of over 7 billion, many living in extreme poverty. As the world slowly crawls out of poverty and gains access to second-hand and cheap technology, what can a few developers build for billions by 2030?
Software running behind much of the world’s infrastructure celebrates its first century of uptime.
Like the old Roman aqueducts and roads still in use today, some software essentially becomes eternal, even as the languages, tools, and people behind it are long gone. Other projects, massive in scale but unable to amass collective resources, must instead plod along over decades. Current research ideas, such as reverse engineering, techniques for mining software repositories, and program-understanding tools, may not be enough to ensure the longevity of these systems or the recovery of knowledge about them.
Hat tip to Adrian Kuhn and Spencer Rugaber
Neural-embodied and augmented programming
If today’s developer must know touch interfaces, tomorrow’s must know brain interfaces.
In the next 50 years, software development will have to account for increased stressors on humans’ mental capabilities. The increased complexity and scale of massive software, new domains, and the inaccessibility of century-old software bump up against fundamental limitations of human cognition. Age, its discriminatory counterpart ageism, and memory decline have been ever-present stressors. As Neal Stephenson puts it: “Software development, like professional sports, has a way of making thirty-year-olds feel decrepit.”
Putting extreme measures aside (instead of botox injections, some will opt for yearly remyelination in Korea), developers will seek ways to enhance and augment their programming abilities using passive enhancers (caffeine++, TMS, nootropics) and will choose from the many devices offering neural and biological sensors (EEG, MEG, EMG, fNIR, pupillometry, galvanic skin response, image-based affective state). A common form factor, the second skull, will be worn as part of the daily job (see SmartCaps and DARPA’s wish list). These devices will be capable of delivering localized transcranial magnetic stimulation pulses that prime and enhance particular brain regions, and of sensing verbal formations, mental imagery, mental load, alertness, and general brain state. The sensed signals will further serve as feedback and input into the programming process, as well as into other tools, visualizations, and collected metadata. Developers will not be the only ones using these devices; other knowledge workers will need them as part of their work too.
The pervasiveness of caffeine, the growing population of students and programmers willing to experiment with brain-altering pharmaceuticals, the complexity of software, increased global competition, and extended working years all point to a high probability of widespread adoption.
On a darker note, crimes of the future will be tried based on brain prints: activity in the hippocampus triggered by crime-scene recognition. Also, Gattaca sorting hats.
Programming-in-the-near and far
As software continues to displace entire industries and professions, many workers may seek new work in localized and specialized markets. As just one example, see the quick emergence of driving services such as Uber and Lyft. With telepresence devices and the rise of remote working, the distinction between near and far may blur.
Some developers will cater to a single user who will pay $$$ for designer software.
Designer development will rival game development as an enticing prospect for young people entering the software industry. Designer software blends entertainment, hobbies, fashion, home decor, and personal branding. Imagine the many retiring Silicon Valley millionaires who want the opulence of Bill Gates’ home automation extended to their hobbies and fashion: insane biking gear, smart bags and clothes.
Software trainers (data engineers)
Software trainers show programs how to exist in our world.
Adam Lally gave a keynote describing the design and development of the Watson system. An interesting takeaway was that most of the engineering effort was not software engineering so much as data engineering: data cleaning, feature extraction, data manipulation and massaging, and manual data-source selection and training.
Classic software engineers tell programs exactly what objects they will see, down to their shape, color, and size. They state the exact sequence of actions, what to say, and sometimes even how to clean up after themselves.
Software trainers show programs how to exist in our world, train them, and teach them how to learn on their own. They give programs the basics of how to recognize objects, but the programs must learn how to put the pieces together. Programs will learn actions through feedback and digital playgrounds, become fluent in multiple languages, visualizations, and sonifications, and be potty trained at an early age. Software trainers will not need to understand the internals of machine-learning techniques, only how to apply them: think of the next step beyond Google Street View drivers.
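The contrast can be sketched in miniature. Below, a classic engineer states the rule exactly, while a trainer shows labeled examples and lets a simple perceptron discover the same rule through feedback; the toy data, labels, and learning rate are all invented for illustration.

```python
# Classic approach: the engineer states the exact rule.
def is_tall_rect_classic(width, height):
    return 1 if height > width else 0

# Trainer approach: show labeled examples and let a perceptron
# learn the same boundary from its prediction errors.
def train_perceptron(examples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights for (width, height)
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred  # feedback signal
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Toy training set: label 1 means "taller than wide".
examples = [((1, 3), 1), ((2, 5), 1), ((4, 1), 0), ((5, 2), 0)]
w, b = train_perceptron(examples)

def learned(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
```

The trainer never wrote `height > width` anywhere; the rule emerged from examples and corrections, which is exactly the shift in skill this future implies.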
Research is already in place for some of this future, while other research is just beginning. The following research areas, which I have been personally examining, will be essential for software engineering in the next 50 years.
Neuroscience of Programming
Previously, you may have seen some of the techniques I’ve used to study interruptions of programmers. Several colleagues and I have already started exploring how techniques such as functional magnetic resonance imaging (fMRI), electroencephalography (EEG), and functional near-infrared spectroscopy (fNIR) can yield insights into the inner workings of a programmer’s mind.
Understanding how programming works in the brain is not limited to theory building; it can have real downstream effects on education, training, and the design and evaluation of tools and languages for programmers. Finally, by understanding what it takes to program, we can construct better enhancers and sensors in support of augmented programming.
Some near term questions:
- Can we finally validate and quantify the idea of “programmer’s flow”?
- What parts of the brain are uniquely activated during programming?
- How should software companies really interview software developers?
- Does teaching particular concepts such as design patterns change the way people fundamentally understand code?
Previously, we’ve studied how crowd documentation, knowledge created via blog posts and Stack Overflow, for better or worse is increasingly becoming consulted over official documentation. We’ve found:
- Developers may be getting as much as 50% of their documentation from Stack Overflow.
- More examples can be found on Stack Overflow than in the official documentation guide.
- In web searches, Stack Overflow questions are visited 2x-10x more often than official documentation.
I believe that continuing to study and expand the capabilities of crowd-level development will not only better situate us to tackle massive software challenges but also enable long-tail developers to quickly create software by reusing and remixing other developers’ efforts. The techniques used to automatically mine, collate, and extract knowledge from online archives will become invaluable for developers who have to maintain century-old software.
Some near term questions:
- Has Stack Overflow saved billions of dollars in programmer productivity?
- What interactions and incentives will encourage developers to contribute in the workplace?
- Colony collapse: How do you sustain crowd efforts and participation in software?
- How do we automatically mine, aggregate, and curate development knowledge from repositories, sites, and posts?
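One small step of that mining pipeline can be sketched with the standard library: pulling code examples out of crowd-documentation HTML. The sample answer body and the assumption that snippets live in `<code>` elements are illustrative stand-ins for any particular site’s page format.

```python
# Minimal sketch of extracting code snippets from a crowd-documentation
# page. The HTML below is a made-up example of a Q&A answer body.
from html.parser import HTMLParser

class CodeExtractor(HTMLParser):
    """Collects the text content of every <code> element."""

    def __init__(self):
        super().__init__()
        self.in_code = False
        self.snippets = []
        self.buffer = []

    def handle_starttag(self, tag, attrs):
        if tag == "code":
            self.in_code = True
            self.buffer = []

    def handle_endtag(self, tag):
        if tag == "code":
            self.in_code = False
            self.snippets.append("".join(self.buffer))

    def handle_data(self, data):
        if self.in_code:
            self.buffer.append(data)

answer_html = "<p>Use a set:</p><pre><code>unique = set(items)</code></pre>"
parser = CodeExtractor()
parser.feed(answer_html)
# parser.snippets now holds the extracted code examples.
```

Real mining research layers much more on top of this, such as deduplication, quality ranking, and linking snippets back to API elements, but extraction of this kind is where it starts.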
Workplace of Tomorrow
Many have dreamed of better programming environments. I believe smart desks, tabletops, and gestural interfaces (see CodeSpace) will vastly improve the office of the future. Still, I have seen far too many smart whiteboards, ambient displays, and other devices sit unused in professional development settings.
More importantly, we need to rethink which interactions and tasks related to programming need extra support in an expanded interaction space. There has been a recent resurgence in redefining expressiveness in programming, along with the ideas of “live programming” and “live coding”. These approaches might make use of the surrounding physical environment more appealing for augmenting the developer’s work and providing feedback.
We created the first international LIVE programming workshop, LIVE 2013, to help crystallize these ideas, but that’s just a start.
Start engineering tomorrow, today. Also please contribute your own vision and ideas below (or email)!