From Twitter: @matseinarsen
September 30th, 2009 permalink
This is inspired by an article by chromatic from ages back, about your programming force multipliers. I think improving your learning is a major “force multiplier”, so this is about how to do that.
Observation 1: Most programmers are on a life-long learning mission.
Actually, all programmers are. Programming is about solving problems, and you're bound to pick up something from that.
Observation 2: Learning happens at different speeds, both between individuals and within the same individual.
One day, after writing CGI scripts the same way for three years, you get the point of mod_perl and surge forward. Meanwhile, your colleague has been discovering cool new skills every week.
Accordingly, most programmers should be concerned not only with their learning, but also with their learning velocity. Your skill level is based not only on what you know, but on how long it took you to learn it and how much more you can manage to squeeze in during your programmer life-span! If you're ambitious but your learning curve is slow or stuck, someone will put you on the sidelines. And if your goal is to really understand something, you will reach a deeper understanding if you can fit more learning into a shorter timespan.
So, this isn't about using mind-maps, role-plays or fancy games at a training course. Rather, this is about tactics and strategies for life-long learning, or the kind of ten-year-long learning it takes to create world-class expertise. I want to share some of what psychology knows about making us learn more, faster, better and deeper – as well as some personal experiences – and hopefully get some feedback about people's own experiences with increasing their learning rate.
So here are some points:
Avoid Arrested Development
You start a new job. You have an amazing learning curve while you pick up knowledge and skills from your new colleagues. Six months in, you've found the comfort level at which you can do your job well. Six years later you are still at that level. Meet Arrested Development. This is why some people are Formula 1 drivers, while most people drive their car well enough to get them to work and the beach.
Getting out of arrested development might not be easy. One key is to avoid automatic behaviours. In coding, this means being aware of your habits and trying to break them. Starting every day by opening a secure shell to the server and firing up vim? How about learning to use Padre instead? Always reaching for CGI::Application for every new web application? Learn Catalyst, HTML::Mason, even Ruby on Rails.
Or push your limits. Try taking on larger responsibilities. Take on a project that is larger, more complex or more difficult than anything you’ve built before.
The problem with getting out of arrested development is that it might require unlearning the comfort level, and may actually decrease your productivity and even your understanding temporarily. It's quite easy to explain in a programming setting: if you're moving on from CGI::Application to Catalyst, your first web application is going to take longer than normal to develop. But your next one might be a big step forward for both you and humankind.
(Although probably mostly for you)
It is found, again and again, that taking risks is a driving force in learning. In experiments in controlled learning environments, a willingness to try things out, click more buttons and do individual try-and-see experimenting is clearly correlated with better learning outcomes. And it's simply because the risk-takers get more learning opportunities: they see more of how things work.
So make sure you have a safe testing environment, a box for experiments and wild ideas. Try to break things that work, find edge cases or do things in a new way. Have a machine you can re-install without losing your precious collection of photos. Write some crazy ideas on your blog. Try and see what happens. But most importantly: create a technical environment that is conducive to risk-taking, just as much as a social one. If your development server is a sacred cow or people are dependent upon it, set up a crash-test server.
I think one reason Test Driven Development works is that it forces you to think about and try out your software. Writing tests is also a great way to get to know new packages or software suites; you can take risks you would never imagine taking while developing code. This method takes two scalars? I wonder what happens if I give it a 10,000-element list of Japanese characters!
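In Perl, that kind of try-and-see experiment is cheap to write down as a test. A minimal sketch with Test::More, where `concat_pair` is a made-up stand-in for whatever two-scalar function you are exploring:

```perl
use strict;
use warnings;
use Test::More tests => 2;

# Stand-in for a function from a module under study;
# the name and behaviour here are made up for the example.
sub concat_pair {
    my ( $x, $y ) = @_;
    return "$x$y";
}

# The documented case: two scalars.
is( concat_pair( 'foo', 'bar' ), 'foobar', 'two scalars, as documented' );

# The risky experiment: a 10,000-element list of Japanese characters.
my @chars = ("\x{65E5}") x 10_000;
is( concat_pair(@chars), "\x{65E5}\x{65E5}",
    'extra arguments are silently ignored' );
```

Run it under prove; a surprising failure here can teach you more about a module than an hour of reading its documentation.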
And let me repeat this: Take on a project that is larger, more complex or more difficult than anything you’ve built before.
Learn the right things
Some knowledge needs prerequisite knowledge. It's just a fact of life. If you try to learn how to make 3D graphics, it will take you a lot longer if you don't know vector mathematics. (And you might want to look at something other than Perl for the implementation..)
It’s pretty obvious stuff.
What is less obvious is what the optimal learning direction in computer programming actually is. In Perl, is there a learning path that is better than others? A sequence of reading perldocs that is more effective than another? I can certainly imagine some good and bad places to start, but anyone can do that. What is less clear is what happens after that start. When you're half-good and can do what you want, but want to expand, which directions are the best to follow?
I actually don’t have a good answer to that yet. My only advice would be that if the terminology is getting too tricky, it is time to go back a step. And not only to..
Learn the terminology
When you meet a new term, don't fill in its meaning from the context. One of my own most immediate improvements in learning came from always looking up words I don't understand, right away. I believe there is usually a reason I don't know the word, and I've often been surprised to find that my "context fill-in" was actually way off.
Also, as suggested above, if you're missing too much terminology, it might be a hint that you are in over your head and need to gain the prerequisite knowledge first.
Learn something new outside Perl
Everyone will tell you this. Stuck on Perl? Get some new vibes from Ruby. C coding getting you down? Try Scala (it should get you back to C soon enough). And you can go further. Adam Kennedy says he gets inspiration from places like TED conferences, New Scientist or The Economist. Some people will suggest that learning just about anything can get you out of a rut.
However, there is a condition. If it is going to help your programming, it must involve some sort of domain-transferable skill. Not everything new you learn will necessarily do that. I've been doing digital photography for nearly ten years and have gotten to a decent level. It has absolutely not improved my programming in any way; it is far too domain-specific. Getting a degree in psychology, however, has propelled me forward on a lot of levels, from new learning skills to seeing new ways of solving problems and learning how the brain does biological information processing. Studying Zen sometimes provides me with a focus that is very conducive to understanding difficult subjects. The same goes for learning mathematics, particularly discrete mathematics.
So consider well what you learn. Pick the right thing, and you might gain a unique perspective on programming or just sharpen your thinking. Pick the wrong thing and you are wasting good learning time on taking vacation shots.
Of course, you might want a whole and fulfilling life filled with culture and art too, but that’s not what this text is about.
Accessing other people's experience
Talk with people – with them, not to them. Unless you are Ada Lovelace or Charles Babbage and invented this stuff, learning programming involves transferring knowledge from another human's brain into your brain (I know some of you will lament this fact). One of the most effective ways of doing this, and the one we are most biologically disposed to, is talking combined with listening. Reading is a hard-learnt skill that has been piggy-backed on top of the visual system over the last few thousand years. Conversation, on the other hand, has a significant part of your head dedicated to it. Use that!
Study the code of excellent programmers and learn from it. Avoid the code of bad programmers, and if you have to look at it, don’t learn from it. Perl is fantastic for this. As well as being a great code repository, CPAN is also a fabulous learning resource. In addition, the core modules are often very good and thorough code that has been optimized and looked after a lot. If you bring your laptop on the train and lose the connection to the net, study a core module.
The problem of tacit knowledge. A problem with accelerating training towards an expert level is that real expert knowledge is not the same as the knowledge written down in documentation and books. Rather, what actually sets a novice and an expert apart is unknown, or at least not explicitly known, to either. If you just squeeze ten years' worth of reading into someone in five years, they might still be at the five-year level in real skill, but with a lot of knowledge they don't really know how to apply. This is related to the distinction between declarative and procedural memory: what you read and what you do are remembered in completely different ways and places in your brain.
That, for example, is what things like the "variable roles" mentioned in a previous post try to encode. They find out how experts understand and use variables, and try to teach that to beginners instead of giving them the basic intro and letting them gain the experience the hard way. How does this apply in a life-long learning perspective? Study how and what people do, not only what they say (or write). Also: study design patterns. Don't necessarily base your own software designs on them, but know them, as they encode expert experience that was previously tacit.
Increase your physical learnability
Up until recently, we thought you grew new neurons (brain cells) up to a certain age, after which it stopped and it was all downhill from there. That is not the case. At least in the hippocampus, the centre of forming new memories, neurogenesis has been found to take place all through life.
But there is a condition…
It takes physical exercise, namely aerobic training – that means running, biking, rowing or anything that makes you pant and sweat for an extended period of time.
I’m sorry… but it will increase your learnability…
Furthermore, your brain needs the right chemicals to actually form new neurons. One major class of nutrients that can't be synthesized by your body is the omega-3 fatty acids. Omega-3 has repeatedly been found to increase general cognitive ability (meaning higher processing power in your head) under many conditions. And you know you want more processing power as well as increased memory. Omega-3 fatty acids can be bought as capsules, or found in salmon and walnuts if you want to eat those every day.
But be careful with the dosage. Too much omega-3 can lead to thinning of the blood, and while you might want to accelerate your Perl learning until your gums bleed, you didn't read that here.
And the last, but not least, point is to be well rested. The code-all-night pattern might work when you are a teenager or young adult and physically don't need as much sleep. After that, it will effectively decrease your hard-won cognitive abilities.
Believe it or not, I would actually like to expand on most of the points above, provide some more evidence and put some numbers to the claims. But I have to get my Perl Ironman 10-day post out, and that effectively limits the time.
So, please share how you learn, or your advice for increasing your learning rate. I know some rather able Perl programmers drop by here occasionally. If you managed to become a guru, please share what you have done differently. Is it just many years of experience? Or do you have a unique approach?
September 28th, 2009 permalink
One of my own favourite, but generally least liked, genres in Psychology is Individual Differences Psychology. Instead of finding what people have in common, the ID approach focuses on what sets us apart. This is the area of IQ measurement and personality types, often with some visits into cognitive psychology, as well as teaching and – particularly – grading.
Related, I just came across this: The Programmers Competency Matrix - it's the brainchild of blogger Sijin Joseph. It's not overly scientific (he states he spent an afternoon on it), and it makes no claims about predicting workplace skills or productivity, although it does imply that a higher score means a better programmer.
It’s still a cool and thorough starting point to think about programming skill measurements.
Which I’ll do now. More to come..
September 27th, 2009 permalink
Basing software development decisions on research and controlled experiments currently comes with some challenges. One is that there is very little research available: in a survey of the research literature, a group of researchers with the IEEE Computer Society found that in the decade from 1993 to 2002, only 103 controlled scientific experiments on software development were reported. In addition, a fair number of these experiments have execution problems, often suffering from small sample sizes and non-significant results. Add that experiments often look into only a very small part of computer programming, and that the papers often take quite some time to read and digest, and it seems apparent why research and evidence-backing is so limited in the world of software engineering.
Meet SEED: The Software Engineering Evidence Database. This is a project from California Polytechnic trying to make empirical evidence-based research on software engineering easily accessible. Acknowledging that “software developers are known for adopting new technologies and practices based solely on their novelty, promise, or anecdotal evidence“, the university researchers have tried to put together a database of experiment summaries on topics of interest to software developers.
The database covers subjects such as OO Metrics, Design Patterns, Testing and more, and the 200+ current summaries are written by graduate students and are provided with quality ratings. The goals of the project can be summarized as:
The concept of a community-driven Web database was proposed to engage Net generation students and software professionals with evidence-based software engineering. We deliberately chose the social networking approach of user-generated and reviewed content as a way to implement SEEDS since we thought that students would more easily relate to the course project and be more enthusiastic about it.
Also have a look at the project summary: "Engaging the net generation with evidence-based software engineering through a community-driven web database" by David S. Janzen and Jungwoo Ryoo.
September 24th, 2009 permalink
I'm trying to keep the level of "management" articles low on this blog, but this one is quite funny and I wanted to comment on it. Last week Jeff Ello's article The Unspoken Truth About Managing Geeks did its rounds on the Internet. Not surprisingly, with quotes like this:
[the IT world] is populated by people skilled in creative analysis and ordered reasoning. Doctors are a close parallel. The stakes may be higher in medicine, but the work in both fields requires a technical expertise that can’t be faked and a proficiency that can only be measured by qualified peers.
When hiring an IT pro, imagine you’re recruiting a doctor. And if you’re hiring a CIO, think of employing a chief of medicine
It’s not bad at all! Compare me with Dr. House, and I’ll send your article to my friends too!
Ello has some good points, though, and probably does a good job as a management consultant. Not because of his amazing insights about geeks, but because his observations can easily be distilled down and applied to anyone who has ever been managed, anywhere: show people respect and they will like you and do what you want.
But there is a second opinion. Looking for one, I came across this article by Tim Bryce, who appears to hate those low-IQ programmers and their techno-babble, yet has chosen to spend 30 years advising people on how to manage them. Let's say he occupies the other end of the management consultant spectrum from Ello. Now, he has some saucy quotes, for example his own "Bryce's law":
"There are very few true artists in computer programming, most are just house painters."
Whereas the knowledge of the language is vital to performing their job, programmers often use it to bamboozle others and heighten their own self-importance. To outsiders, programmers are viewed as a sort of inner-circle of magicians who speak a rather cryptic language aimed at impressing others, as well as themselves. Such verbosity may actually mask some serious character flaws in their personality.
and the rather inflammatory
Regardless of the image they wish to project, the average programmer does not have a higher IQ than any other worker with a college degree. In fact, they may even be lower.
and it goes on and on like this.
He has the occasional good point, however. Particularly, I find this one to hold some truth:
I deliberately avoided the term “Software Engineer” because this would imply the use of a scientific method to programming. Regardless of how one feels about the profession, this is hardly the case.
Programming is often dominated by untested dogma, lack of empirical study, arguments from authority, opinions presented as facts, try-and-see approaches, voodoo programming and all sorts of other practices that would be counter-productive to, well, great engineering achievements. Those are the things I'm trying to fiddle around with on this blog, with a focus on the parts that deal with the human side. Of all the concepts in programming, few have been thoroughly tested to see if they are actually as good as they are made out to be. That doesn't mean they are not, of course – only that we often have nothing but convincing arguments to base decisions on.
If the above got too depressing, here's another Ello quote to pick the mood back up:
A good IT pro is trained in how to accomplish work; their skills are not necessarily limited to computing. In fact, the best business decision-makers I know are IT people who aren’t even managers.
Ok, that was probably the last post I’m going to do in a long time about management, even if it deals with people and programming. Other people write better about it (although perhaps not the two above).
Actually, feel invited to post links to good blogs about programming management below, because I don't really know who these better writers are…
September 20th, 2009 permalink
As a programmer, you write for two very different kinds of readers. One is the rigid computer platform, the other is the human maintainers of your code. For the former we have quite conclusive guidelines on what works, but the latter is consistent only as a source of disagreement and uncertainty. Typically the guideline is "Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.", while most people end up writing as if the person maintaining the code is themselves.
Neither approach is particularly good – I can't imagine code written for violent psychopaths would really be that great to maintain. And at the other end, a lot of coders optimize for their own readability, and will argue the finer points of formatting from that point of view rather than that of the actual readers – missing the point that readability is ultimately decided by the reader, not the writer.
I certainly catch myself doing this quite often.
In the Perl world, this is probably an even bigger issue than in other languages. There-Is-More-Than-One-Way-To-Do-It is still one of Perl's big strengths, but certainly not all the ways are equally good – which is why Perl Best Practices is now a staple of any Perl-wielding office. Python, by contrast, with its "there should be one – and preferably only one – obvious way to do it", seems to be able to get away with 19 simple statements to define the pythonic best practice.
Now, anyone can have an opinion, particularly since research on code readability is still quite lacking. We just don't know for certain yet what makes people get code. However, cognitive psychologists have been interested for several decades in how people generally organize large data structures in their brains, and some of this work has produced neat practical applications for coding, such as Furnas' paper on 'fish-eye' views (1986). (This is basically about IDEs with collapsing code branches, but now you know why that's nice, how to do it right and who thought of it first.)
Research on general language comprehension, however, is a massive field. In the ACM Journal of Computer Documentation's August 2000 issue, George R. Klare provides some clear-minded, research-based advice for communicators that might be enlightening and is a good starting point for thinking about the readability of normal text – which might apply to programming.
He specifies four purposes of readability:
- Reading speed and efficiency.
- Readership.
- Reader judgment.
- Comprehension, learning and retention.
Of these, only the last is of any big interest for programmers; you don't typically care about the speed at which the maintainer reads your code. You might care about reader judgment if you code to impress your fellow programmers, but that is another chapter. Readership refers to how the size of the readership may be a function of the simplicity of the text – a consideration of the skill level of your maintainers, perhaps, but few people write code intending it to be read by a large audience.
However, the biggest issue with text and code alike is comprehension. Even more so for code, as it is essential that the maintainer is able to build a precise representation of the code in his own mind. He must also be able to understand both what the code actually does and what it is intended to do, since these don't always line up. For debugging purposes it is absolutely essential that the reader can separate the soft human intentions from the hard computer operations, so any code must be understood on two levels simultaneously.
Now, how does readability affect comprehension? First, keep in mind that readability in natural language usually refers to choice of words and sentence length, and is typically measured by the level of education necessary to read the text. This might not be appropriate for code readability – level of programming skill might not map to level of code understanding in the same way, and readability in code is often just a matter of indentation and syntax.
But it probably maps somewhat closely. This is where research is lacking again, so it is hard to tell.
Klare's big advice, however, is that higher readability does not always convert into higher comprehension, but is modulated by the situation and traits of the readers. And that is the important part. He describes four conditions under which it does not work as you would expect:
- A reader can understand at higher levels than expected if his motivation is high. Also, skill level is a fuzzy concept.
- If time is not limited, an increase in readability might make no difference to comprehension. The more limited the time, the larger the effect of readability.
- The greater the reader's background knowledge of the topic, the less effect readability has. On the other hand, even an expert in one field may prefer higher readability in texts outside his field.
- Type and level of motivation might affect comprehension.
or to quote his summary:
[..] more readable, written material is likely to produce greater comprehension, learning and retention than less readable only when one or more of the following factors are present: the less readable is much harder than the more readable, and clearly beyond the reader’s usual level; reading time is limited; the reader does not have a large amount of background or experience with the topic being covered; and, the reader has a relatively strong set to learn.
Now, does this in itself increase our understanding of the readability of programming code?
However, if we think of readability not just as the reading of code as natural language, but as the understanding of semantic concepts, there is an interesting observation to be made. If we try to make a program easier to understand by using only basic programming constructs and avoiding more advanced concepts, Klare's summary shows – in line with other research – that this approach to readability doesn't always increase comprehension.
In a very recent piece of research using Scala, Gilles Dubouchet found that using more compact, advanced functional programming methods rather than basic, typical loop constructs increased comprehension. Although a single piece of research such as Dubouchet's is too limited to base decisions on, it becomes more interesting when it lines up with prior research on language comprehension.
Together it indicates that using more advanced methodology can increase comprehension for both original programmers and maintainers, unless they are pressed on time or motivation.
Unlike many other languages, Perl lets you increase the complexity of your programming across methodologies quite freely. You can start with simple baby-Perl, go through procedural programming, add objects or start playing with functional approaches. You can use Aspect-Oriented Programming or bolt on your own crazy, homespun programming methodology, if you so please. For comprehension and readability purposes, the research above indicates that if you consider your audience and their situation well, going for a higher, more advanced level might not be a disadvantage.
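To make that ladder concrete, here is the same small computation – the sum of the squares of the even numbers from 1 to 10 – written once in baby-Perl and once as a functional grep/map pipeline. The task itself is just a toy chosen for the illustration:

```perl
use strict;
use warnings;
use List::Util qw(sum);    # List::Util is a core module

my @nums = ( 1 .. 10 );

# Baby-Perl: an explicit loop, one step at a time.
my $total_loop = 0;
for my $n (@nums) {
    if ( $n % 2 == 0 ) {
        $total_loop += $n * $n;
    }
}

# Functional style: the same computation as a grep/map pipeline.
my $total_fp = sum map { $_ * $_ } grep { $_ % 2 == 0 } @nums;

print "$total_loop $total_fp\n";    # both are 220
```

The pipeline is more compact and states the intent (filter, transform, combine) directly – which reader it serves better depends, as argued above, on their background and situation.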
But do keep in mind that the research is still a bit patchy, and this is mostly an argument without empirical data behind it. I'll make sure to report what I find, as this is just the first of many articles about readability…
September 17th, 2009 permalink
Another piece of old research. It's so interesting, though, that I can't help putting up a note about it. In a piece of research released in 2000, Lutz Prechelt compared C, C++, Java, Perl, Python, Rexx and Tcl. (It has gotten a fair bit of attention before, so it's not new material.)
What's so good about it is the fairly rigorous and natural approach: instead of leaning exclusively on local students or one company's employees performing some unnatural task, the researchers solicited solutions online for the scripting languages (Perl, Python, Rexx and Tcl) and got 80 different implementations of a simple dictionary task. On these they ran a lot of metrics and generated solid, comparable empirical data. Read it for the details (there is also a prettified version available).
I liked this observation, however:
For all program aspects investigated, the performance variability due to different programmers (as described by the bad/good ratios) is on average about as large or even larger than the variability due to different languages.
It’s quite profound – and even if the study itself is a bit dated and has some minor flaws – this observation probably still holds regardless!
Also, for Perl programmers, this statistic is quite neat:
The variability in execution speed for Perl programs was far lower than for the other languages except Tcl. It might be over-interpreting, but it indicates Perl is a bit more predictable – or Perl programmers are – than most of the other languages. Speed-wise, Perl also holds up quite well: even the slower end of its execution-speed range is better than the slower end for all the other languages.
The curious thing is that this indicates that variability between programmers matters less with Perl (and Tcl) than with the other languages. At least speed-wise, but the low variability also seems to be a tendency in the paper's other metrics.
What's so interesting, though, is how this demonstrates in hard numbers the importance of human variables in programming. It's almost as if the framework matters less than the people using it… funny, eh?
Also see this page of language comparison links.
September 16th, 2009 permalink
15 years ago, the research journal Human-Computer Interaction published its special issue on Object-Oriented Programming. Having realized that a lot of the claims made about OOP at the time were not technical in nature, but rather psychological and cognitive, the special issue attempted to present empirical and experimental research examining the claims about OOP's advantages over procedural programming. Or as the editor Bill Curtis notes:
Object-oriented (OO) design and programming trace their lineage to research on abstract data types in the late 1960s and early 1970s, but they did not become popular software development techniques until the late 1980s. In all this time there has been little serious empirical or experimental study of OO techniques. What usually passes for evaluation is either a testimonial from an industry pundit who may have developed a small application using an OO technique [..]
Even worse, this special issue of Human-Computer Interaction will be read by very few of the thousands of the people who read Computerworld, Information Week, Datamation, Software Development, and the other trade press periodicals in which OO methods are touted as often as explained. The results reported in this special issue are promising, but simultaneously they provide sobering expectations about the effort involved in obtaining the benefits of OO methods.
And little has changed since then – if anything, the influence of Computerworld pundits and today's bloggers is even larger than that of empirical and experimental research.
Just 15 years later, though, few of the issue's articles have stood the test of time. Still, there are some anecdotes and pieces of information that remain interesting, if nothing else as a reminder of what people were thinking about OOP before it became mainstream, standard fare.
One quote from Curtis is still interesting for programmers changing from procedural to OO-style programming – as I guess most programmers actually still do while learning the ropes:
[..] professionals may require experience on as many as three OO projects before they become proficient in these methods. The reputed advantages ultimately occur, but not during the early projects in which programmers are on the learning curve and have difficulty capitalizing on the capabilities offered by OO methods.
Also, Herbsleb reports a few interesting facts from general cognitive psychology on how people generally understand objects, as part of a larger article on software engineering teams:
In careful experiments, Gentner (1981; Gentner & France, 1988) showed that, when people are asked to repair a simple sentence with an anomalous subject-verb combination, they almost always change the verb and leave the noun as it is, independent of their relative positions. This suggests that people take the noun (i.e. the object) as the basic reference point. Models based on objects may be superior to models based on other primitives, such as behaviours.
And object hierarchies..
Miller (1991) described how nouns and verbs differ in their cognitive organizational form. Nouns – and hence the concepts associated with them – tend to be organized into hierarchically structured taxonomies, with class inclusion and part-whole relations as the most common linkages. These are also, of course, the most common relations in OO representations. In human cognition, these hierarchies tend to be fairly deep for nouns – often six to seven layers. These hierarchies support a variety of important cognitive behaviours, including the inheritance of properties from superordinate classes. In contrast, verbs tend to be organized in very flat and bushy structures. This again suggests a central place for objects, in that building inheritance hierarchies will mirror the way humans represent natural categories only if the basic building blocks are objects rather than processes or behaviours.
And at the risk of exceeding acceptable amounts of quoting (but this is all closed-access research, so I have to quote it for you to read it):
[..] human understanding of hierarchies tends to be organized around basic-level classes (i.e., intermediate levels of abstraction that form an anchor point for human classification and reasoning). As described by Rosch (1978; Rosch, Mervis, Gray, Johnson, & Boyes-Braem, 1975), basic-level categories have large numbers of differentiating attributes, whereas, at levels both lower and higher, the differentiating attributes are very modest in number. [..] The tendency in generating class hierarchies for inheritance is to push attributes and behaviours as high as possible. But, to the extent that this is successful, it will lead to hierarchies radically different from those that both users and developers have naturally.
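Rosch's basic-level point has a direct analogue in class design. A minimal sketch in Python (the class names and domain are my own illustration, not from the quoted papers): pushing every attribute as high as possible yields deep, abstract layers nobody naturally reasons in, while a basic-level design keeps the richly differentiated class where people actually classify things.

```python
# "Attributes as high as possible": a deep hierarchy of abstract anchor
# points that users and developers rarely think in terms of.
class Entity: ...
class PhysicalEntity(Entity): ...
class AnimateEntity(PhysicalEntity): ...
class Mammal(AnimateEntity): ...
class Canine(Mammal): ...

class Dog(Canine):
    def bark(self):
        return "woof"

# Basic-level alternative: shallow, anchored at the category people use.
class Animal: ...

class BasicLevelDog(Animal):
    # Most differentiating attributes live here, matching how people
    # describe dogs: they bark, they fetch, and so on.
    def bark(self):
        return "woof"
    def fetch(self):
        return True
```

Both expose the same behaviour; the difference is where a reader must look to understand what a "Dog" is.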
The psychology papers by Gentner and Rosch seem well worth a read for anyone interested in how the brain does categorization “in the wild”. They are also freely accessible!
It might be forgotten, but originally a major argument in favour of OOP was improved knowledge sharing and better communication about program code, both between developers and with outside parties. Nowadays, if OOP is ever even questioned, you will more often hear arguments about code reuse. Better communication and knowledge sharing appears to be the main focus of the papers in the special issue, but it is also where they mostly fail to find hard empirical evidence.
As I’ve mentioned before, code understanding and knowledge sharing should be of far more interest to developers. That it is hard to find solid empirical evidence to base advice on in that regard is a big, under-appreciated problem! The examples from general cognitive psychology all appear to provide direction for OO design, but without programming-specific experiments it is impossible to tell whether that actually holds.
And that was a blast from the past…
September 12th, 2009 permalink
Apparently, the distribution of marks in introductory programming courses is bimodal – two distinct humps rather than a single bell curve.
How come? Some people just get it and some don’t? A failure of teaching?
See the discussion at Mik’s blog.
I don’t necessarily agree that this is caused by people falling behind and not being able to catch up (although that is probably also a problem). If that were the case, you would probably see a skewed single-peak distribution, as one of the commenters suggests.
Rather, I think the either/or explanation is the right one. Now does that mean people in the left hump will never be able to learn, or are doomed to a life of poor understanding of programming? I think not, but how to move them into the ‘get it’ group is another matter.
September 9th, 2009 permalink
I promised to explain better the idea of variable roles I mentioned in the previous post about natural programming.
This is based on a finding by Finnish researcher Jorma Sajaniemi (published work), who discovered that 99% of variables in novices’ code can be categorized into 11 different roles. These roles are variable uses every programmer will recognize: iterators, constants, flags and so on – although in role terminology the names differ somewhat (steppers, fixed values and one-way flags, for example). Mr. Sajaniemi has also found that these roles match tacit knowledge in expert programmers – i.e. the 11 roles are typically intuitively recognized by expert programmers even if it is not explicit or active knowledge.
Whether or not the same 11 variable roles are enough to categorize expert variable use seems harder to pin down, but there is a master’s thesis from their lab finding that they are sufficient. And, not surprisingly, in expert-written programs, the roles have a significantly different distribution. Judge for yourself, is this enough to describe your own variables?
Fixed value || A data item that does not get a new proper value after its initialization
Stepper || A data item stepping through a systematic, predictable succession of values
Most-recent holder || A data item holding the latest value encountered in going through a succession of unpredictable values, or simply the latest value obtained as input
Most-wanted holder || A data item holding the best or otherwise most appropriate value encountered so far
Gatherer || A data item accumulating the effect of individual values
Follower || A data item that gets its new value always from the old value of some other data item
One-way flag || A two-valued data item that cannot get its initial value once the value has been changed
Temporary || A data item holding some value for a very short time only
Organizer || A data structure storing elements that can be rearranged
Container || A data structure storing elements that can be added and removed
Walker || A data item traversing in a data structure
The list is from An introduction to the role of variables, but there is also a more extensive description available.
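To make the roles concrete, here is a small function with each variable annotated by the role it plays. The annotations are my own reading of the role definitions, and the example is in Python rather than Perl purely because it keeps the sketch short.

```python
# A small function annotated with Sajaniemi-style role names
# (annotations are my own, not from the paper).

def summarize(values):
    limit = 100            # fixed value: never reassigned after initialization
    total = 0              # gatherer: accumulates the effect of individual values
    best = None            # most-wanted holder: best value encountered so far
    previous = None        # follower: takes the old value of another variable
    seen_negative = False  # one-way flag: once set, never reset
    for i, v in enumerate(values):  # i: stepper; v: most-recent holder
        if v < 0:
            seen_negative = True
        if v <= limit:
            total += v
        if best is None or v > best:
            best = v
        previous = v       # follower trails the most-recent holder by one step
    return total, best, seen_negative

print(summarize([3, -1, 250, 7]))  # (9, 250, True)
```

Seven variables, six distinct roles – which matches the claim that a handful of roles covers nearly all everyday variable use.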
Sajaniemi’s aim with the research and the role concept appears to be teaching programming. For Perl programmers, the article about Teaching Python using Roles may hint at how useful roles could be for teaching Perl. Otherwise, his research on variable roles seems to revolve around the Java world.
What I find most exciting with this is the approach to studying and extending programming. Instead of going the computer science route, it looks at how people program, identifies interesting patterns and puts forward numbers and testable hypotheses.
Now can this be used to actually extend programming and help expert programmers?
My first observation is that at least five of the roles – stepper, most-recent holder, most-wanted holder, gatherer and follower – typically play their parts in the same common loop patterns. If this is such a typical way of organizing code, perhaps language design can help – or perhaps it already does, in the immutable state and for-comprehensions of functional languages such as Scala, or in the map, grep and higher-order functions of Perl. If nothing else it may explain why expert programmers often tend more towards those constructs (or ?).
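The point about functional constructs absorbing these roles can be shown directly. Below, the same computation is written twice, in Python for illustration: once with explicit role-playing variables, once with built-in constructs that hide the stepper and most-recent-holder bookkeeping inside the language.

```python
data = [4, 9, 2, 16, 7]

# Explicit roles:
i = 0                  # stepper
total = 0              # gatherer
largest_even = None    # most-wanted holder
while i < len(data):
    current = data[i]  # most-recent holder
    total += current
    if current % 2 == 0 and (largest_even is None or current > largest_even):
        largest_even = current
    i += 1

# Functional style: the stepper and most-recent holder disappear into
# sum(), the comprehension and max().
total2 = sum(data)
evens = [x for x in data if x % 2 == 0]
largest_even2 = max(evens) if evens else None

assert (total, largest_even) == (total2, largest_even2)
```

The roles have not gone away – the language construct now plays them for you, which may be exactly why experts gravitate there.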
But if we know these are the typical ways variables are used, how about implementing variable roles (instead of types), with special functionality that simplifies and enhances what they are used for: most-wanted holders that trigger events, gatherers and followers with history, walkers with an implicit track, organizers and containers optimized for moving elements (or not), and so on.
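A hypothetical sketch of what that might look like, in Python for brevity – the class names and APIs here are entirely my own invention, not from Sajaniemi’s work:

```python
class Gatherer:
    """Accumulates values and, unlike a bare accumulator, keeps history."""
    def __init__(self, start=0):
        self.value = start
        self.history = [start]
    def add(self, x):
        self.value += x
        self.history.append(self.value)

class MostWanted:
    """Holds the best value so far and fires a callback when it improves."""
    def __init__(self, better, on_improve=None):
        self.value = None
        self.better = better          # comparison defining "best"
        self.on_improve = on_improve  # event hook triggered on improvement
    def offer(self, x):
        if self.value is None or self.better(x, self.value):
            self.value = x
            if self.on_improve:
                self.on_improve(x)

g = Gatherer()
best = MostWanted(lambda a, b: a > b)
for v in [3, 1, 7]:
    g.add(v)
    best.offer(v)
print(g.value, g.history, best.value)  # 11 [0, 3, 4, 11] 7
```

The role carries its own extra behaviour (history, events), which is exactly what a plain scalar of some type cannot give you.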
Whether that is a good idea is hard to tell. At least in Perl, some of this could be patched on with a little magical module, so it would be simple to test. I’m playing around with it, and I’ll keep you updated if something meaningful comes out of it. If you know of any other language that implements something similar, please leave a comment!
September 7th, 2009 permalink
I just touched upon how natural programming can be a way forward for Perl in my previous post, and quickly saw a tweet from chromatic not understanding the “criticism (?)” of Perl 6. As he is a member of the Perl 6 development team, I am happy that he noticed my post, but less happy about the lack of understanding. So a clarification is probably called for.
My post was not a criticism of Perl 6. I am quite skeptical and apprehensive about P6, which shone through in my article, but criticising a project that started in 2000 for not implementing ideas published in 2008 would be rather unfair. However, Perl seems to be in search of a purpose nowadays, with lessening interest and corresponding calls for better marketing. I wanted to present a large challenge to programming in general, and to show that this is an opportunity for Perl and the Perl community.
Particularly since I don’t think better marketing is such a good idea. I think a programming ecosystem that makes you go “wow, this is really going to make my programming great” markets itself.
So this was actually more picking up the challenge presented by chromatic himself to come up with a vision for Perl.
That he didn’t understand that, I can’t help. But I am quite impressed that the Perl 6 development team picks up on talk in the community that quickly. That’s promising for the “community rewrite of Perl and the community” that Perl 6 development originally promised.
(Actually I just wanted to show a cool debugging tool but got carried away.)