The International Conference on Program Comprehension (ICPC 2011) was a charming affair, bringing together a diverse set of researchers from around the globe.
Here are some of my personal reflections:
Three Research Perspectives
Leon Moonen drew upon centuries of wisdom collected by architects and map makers to find design principles of wayfinding that could be applied toward software exploration and navigation interfaces.
Margaret Burnett showed how studying the barriers faced by a segment of a population (in this case, gender) can be used to make design improvements that benefit the whole population.
Margaret-Anne Storey reflected on the trials and tribulations of trying to get a software visualization system (SHriMP) loved and adopted by professional programmers. In the process, she emphasized the importance of working with cognitive theories, but also pointed out that our current set of program comprehension theories has long since expired.
The technical program was both strong and interesting, with a nice mix of topics, something I personally did not see at ICSE.
The program started out on a technical note: clustering, identifier splitting, and smoothing filters to improve information retrieval (IR) tasks in software. The tradition of awarding best papers to IR papers also continued, with “Improving IR-based Traceability Recovery Using Smoothing Filters” taking the prize.
Another strong topic that emerged was the study of the interaction or work history of developers. Annie Ying looked at edit patterns and their relation to task difficulty. David Röthlisberger studied patterns of artifact relevance for different types of programming tasks. Lile Hattori presented a well-executed study of the benefit of replaying code changes in the IDE compared to viewing diffs in Subversion.
There were also some really nice empirical studies and tools. Anja Guzzi gave a spunky presentation (one of the best talks) on collaborative bookmarks in the IDE (check out Pollicino). Stefan Endrikat cast a critical eye on aspect-oriented programming (AOP) with a study that called the purported benefits of AOP into doubt. Daqing Hou manually pored over newsgroup postings to better understand why developers have trouble with certain API methods.
The industrial challenge was a different but refreshing way of engaging researchers. There were two parts: first, use current code comprehension tools to try to find and fix three bugs in a robot controller; second, use those insights to write fictional emails to other stakeholders: customer support, a tech lead at another company, and the CEO.
There were several interesting outcomes:
- No submission used an IR-based technique, despite their popularity in the community.
- Two submissions used statistical debugging techniques, and both performed very poorly due to a superficial understanding of the bugs.
- The simplest techniques worked best: program slicing and code differencing.
- The social aspect (the emails) proved to be the most difficult part for participants. The challenge exposed the need to explore more of the ecosystem of program comprehension, and how different stakeholders may have different information needs that are not currently targeted by any research effort or tool.
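To illustrate why slicing is so effective despite its simplicity, here is a minimal sketch of backward slicing, assuming straight-line code represented as (target, used-variables) pairs. This is purely my illustration, not any challenge entrant's tool, and the example program is hypothetical:

```python
def backward_slice(stmts, criterion_var):
    """Return indices of statements that may affect criterion_var.

    stmts is a list of (target, uses) pairs: the variable a statement
    assigns, and the set of variables it reads. Walking backwards, we
    keep a statement only if it defines a variable we still need, then
    add the variables it reads to the needed set.
    """
    needed = {criterion_var}
    kept = []
    for i in range(len(stmts) - 1, -1, -1):
        target, uses = stmts[i]
        if target in needed:
            kept.append(i)
            needed.discard(target)  # this definition satisfies the need
            needed.update(uses)     # but its inputs are now needed
    return sorted(kept)

# A toy five-statement program (hypothetical):
program = [
    ("a", set()),    # a = input()
    ("b", set()),    # b = input()
    ("c", {"a"}),    # c = a * 2
    ("d", {"b"}),    # d = b + 1
    ("e", {"c"}),    # e = c - 3
]
print(backward_slice(program, "e"))  # -> [0, 2, 4]
```

Statements 1 and 3 (the `b`/`d` chain) drop out of the slice because they cannot affect `e`, which is exactly the noise reduction that makes a bug hunt tractable. Real slicers must also handle control flow and aliasing, which this sketch deliberately omits.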
We need to archive conference tweets. They’re already gone from Twitter… UPDATE: Andrian Kuhn has archived them here.
Many researchers don’t have their slides, papers, or tools online!
I would like to see more IR-papers that focus on tool-building and describe the experiences of putting these tools in the hands of developers.