71 useful articles on online behavior change you can choose not to read

From Twitter: @matseinarsen December 27th, 2013

Some of the most interesting things happening in 2013 were around recommender engines. Amazon.com won an Emmy for their video recommender, and Netflix's algorithms got mainstream coverage with every mention of the movie Sharknado. Also, Arjan Haring presented some interesting thoughts about social proof in the framing of recommendations.

Another interesting trend is more insight into the effects of user-generated content like reviews and online comments. Sinan Aral demonstrated how reviewers are shaped by social contagion, and the trend of shutting off comments is (thankfully) growing.

On the less exciting side, last year I hoped for more insights into online loyalty and stickiness this year, but little surfaced in that regard. The only thing I noticed was an article by Arie Goldshlager on predicting repeat customers, and even that just referenced research from 2008. Maybe people keep their cards too close to their chests on this.

Here’s the top material I found and tweeted in 2013 on everything related to online behavior, from conversion optimization and psychology to recommender systems, data science and A/B testing.

 

Online behavior change

“When you want to motivate someone to exercise regularly, a first push up is a great start! The same goes when you want to sell products.”

Maximizing conversion with micro persuasion
(Arjan Haring, Econsultancy.com)


Why We Overestimate Technology and Underestimate the Power of Words
(Arjan Haring, Copyblogger)

7 Principles From 7 Years Of Landing Pages
(Scott Brinker, Search Engine Land)

5 Dangerous Conversion Optimization Myths
(Linda Bustos, GetElastic)

The One (Really Easy) Persuasion Technique Everyone Should Know
(Jeremy Dean, PsyBlog)

The Recipe for a Perfect Landing Page
(Amy Hardingson, Yahoo)

How to Know When You’ve Done Too Much Conversion Optimization
(Chris Goward, Wider Funnel)

How to Use Personalized Content and Behavioral Targeting For Improved Conversions
(Ott Niggulis, ConversionXL)

Nine conversion techniques from the 1920s to try today
(Dave Gowans, Econsultancy.com)

Persuasive Psychology for Interactive Design
(Brian Cugelman)

URLs are for People, not Computers
(Andreas Bonini, Not Implemented)

5 Principles of Persuasive Web Design
(Peep Laja, ConversionXL)

Use of Pricing Tables in Web Design – Starkly Comparison
(Nataly Birch, designmodo)

32 UX Myths
(Zoltán Gócza and Zoltán Kollin)

 

Social Media

“we have mapped the brain regions associated with ideas that are likely to be contagious”

How the brain creates the ‘buzz’ that helps ideas spread
(Stuart Wolpert, UCLA Newsroom)


Personality, Gender, and Age in the Language of Social Media: The Open-Vocabulary Approach
(Schwartz et al., PLOS ONE)

Your casual acquaintances on Twitter are better than your friends on Facebook
(Clive Thompson)

How To Get People To Do Stuff #5: What makes things go viral?
(Susan Weinschenk, The Brain Lady Blog)

LinkedIn Endorsements: Reputation, Virality, and Social Tagging
(Sam Shah, LinkedIn)

So you think you can go viral? Three reasons you may be kidding yourself!
(Sangeet Paul Choudary, Platform Thinking)

Do you fear you are missing out?
(Phys.org)

Measuring International Mobility Through Where People Log In to Commonly Used Websites
(David McKenzie, blogs.worldbank.org)

 

Reviews and comments

“someone invented ‘reader comments’ and paradise was lost.”

This Story Stinks
(Dominique Brossard and Dietram A. Scheufele, New York Times)

The real reason for rotten online reviews on TripAdvisor
(Rory Sutherland, The Spectator)

“Positive comments tended to attract birds of a feather”
(Tim Harford, the undercover economist)

The pitfalls of crowdsourcing: online ratings vulnerable to bias
(Carolyn Y. Johnson, Boston.com)

The Problem With Online Ratings
(Sinan Aral, MIT)

 

Offline Behavior

“Is Starbucks missing out on millions of dollars in revenue because its coffee prices are too low?”

Is Starbucks coffee too cheap?
(Roger Dooley, Forbes.com)

You looking at me? Making eye contact might get you punched in the face
(John Ericson, Newsweek)

Top 10 bargaining tricks in China
(“judaicaman”, eBay buying guides)

10 Dirty Negotiation Tactics and How to Beat Them
(Barry Moltz, Open Forum)

Drinking with your eyes: How wine labels trick us into buying
(Michaleen Doucleff, The Salt/NPR)

Slot machines: a lose lose situation
(Tom Vanderbilt, The Guardian)

How Memories of Experience Influence Behavior
(Peter Noel Murray, PsychologyToday)

No windows, one exit, free drinks: Building a crowdsourcing project with casino-driven design
(Al Shaw, Nieman Journalism Lab)

The Psychology of Effective Workout Music
(Ferris Jabr, Scientific American)

Restaurant menu psychology: tricks to make us order more
(Amy Fleming, The Guardian)

 

Dark Patterns

There’s an entire industry of exploitation that relies on fear and shame as motivators for business.

What Fear-Based Business Models Teach Us About User Motivation
(Max Ogles, FastCompany)

What happens when you actually click on one of those “One Weird Trick” ads?
(Alex Kaufman, Slate)


How to Instill False Memories
(Steven Ross Pomeroy, Scientific American)

The psychology experiment that involved real beheadings
(Esther Inglis-Arkell, io9)


If you text a lot, you are probably also racist and shallow
(Annalee Newitz, io9)

 

Recommender Systems


Are your recommendations any good?
(Mark Levy, Data Science in Action)

The Science Behind the Netflix Algorithms That Decide What You’ll Watch Next
(Tom Vanderbilt, Wired)

Why There Are So Many Terrible Movies on Netflix
(Meghan Neal, Vice)

Shit Recommendation Engines Say
(Lukas Vermeer)

Why You Should Not Build a Recommendation Engine
(Valerie Coffman, Data Community DC)

Online Controlled Experiments at Large Scale
(Kohavi et al., KDD 2013)

Recommender systems: from algorithms to user experience
(Joseph A. Konstan and John Riedl)

 

Data Science

“Robert McNamara epitomizes the hyper-rational executive led astray by numbers.”

The Dictatorship of Data
(Kenneth Cukier and Viktor Mayer-Schönberger, MIT Technology Review)


WTF Visualizations: data science


What Does It Really Matter If Companies Are Tracking Us Online?
(Rebecca J. Rosen, The Atlantic)

16 useless infographics
(Mona Chalabi, The Guardian)

Why you should never trust a data scientist
(Pete Warden)

Statistics Done Wrong
(Alex Reinhart)

The Potential and the Risks of Data Science
(Steve Lohr, New York Times)

Data Science: For Fun and Profit
(Lukas Vermeer)

Seven dirty secrets of data visualisation
(Nate Agrin and Nick Rabinowitz, Creative Bloq)

5 ways big data is going to blow your mind and change your world
(Derrick Harris, Gigaom)

‘Neuromarketing’: can science predict what we’ll buy?
(Alex Hannaford, The Telegraph)

Most data isn’t “big,” and businesses are wasting money pretending it is
(Christopher Mims, Quartz)

DARPA envisions the future of machine learning
(Phys.org)

Obama Campaign Misjudged Mac Users Based On Orbitz’s Experience, Says Chief Data Scientist
(Kashmir Hill, Forbes)

 

Online Experimentation

“When running online experiments, getting numbers is easy; getting numbers you can trust is hard.”

Online Experiments: Practical Lessons
(Ron Kohavi, Roger Longbotham, and Toby Walker, Microsoft)


The do’s and don’ts in A/B testing
(Floor Drees, Usersnap)

Research Practices That Can Prevent an Inflation of False-Positive Rates.
(Murayama K, et al.)

Effective Web Experimentation as a Homo Narrans
(Dan McKinley)

Theory-testing in psychology and physics: a methodological paradox
(Paul E. Meehl, Philosophy of Science)

The Nuremberg Code for human experimentation


Is your A/B testing effort just chasing statistical ghosts?
(Mats Stafseng Einarsen, Booking.com)

Split-testing 101: A quick-start guide to conversion rate optimization
(Conversion Rate Experts)

False positives and false negatives in predicting customer lifetime value
(ariegoldshlager)

 

Let me point out that if you follow me on Twitter you are less likely to miss out! And if you share this article you will look smart. I promise!

3 hugely useful concepts for development team management

From Twitter: @matseinarsen August 18th, 2013

If you want to achieve anything bigger than yourself, you need others to play along. Even if people management isn’t your calling, knowing how to lead people is a hugely useful skill for anyone who wants to achieve something. Having tried, failed and succeeded in leading development teams in various ways, I want to share three concepts that have been very useful to me.

I’ve always been on the lookout for literature on leadership that fits 3 simple criteria: 1) Not based in ideology. 2) Some backing in data and research. 3) Directly applicable, not grand ideas or personal development plans. So far, three concepts have met those criteria and proven very useful: the SCARF model, Level 5 Leadership and High Performance Teams. There’s a lot of questionable material around these concepts online, so I’ve tried to pick a few articles as close to the original sources as possible.

The SCARF model: Before you can really work well with anyone, you have to make them want to approach work rather than avoid it. It’s really that simple. The SCARF model describes, from very basic psychological principles, what matters to people before they will want to participate in something: Status, Certainty, Autonomy, Relatedness and Fairness. This is a very good article that I recommend to everyone:

 

Level 5 Leadership basically teaches humility with resolve. I’m not sure where it’s best to start, but here are a few links. If you want to learn more, read Jim Collins’ book “Good to Great”.

 

High Performance Teams is a very simple concept. It’s a question of helping your team find the right way of working together, and the model gives you some specific elements to look for in your team:

 

There is so much more, of course.  What these articles can help you with is to start thinking along the 3 most important lines in people management: motivation and engagement, how to lead while listening and how to create a system in which your team can succeed.  Enjoy.

 

A do-it-yourself (Dutch) language course

From Twitter: @matseinarsen January 3rd, 2013

I’m just going to leave this here. When moving to Amsterdam, I put together a quick scraping and spidering script to get a list of the most frequently used words in Dutch, to practice the language and build my vocabulary. The thinking was that by starting with the highest-frequency words, I would learn the language in a demand-driven fashion – and learn what matters first! It’s got a few bells and whistles like Google Translate links and context examples. I figured I’d share it, since it comes up once in a while.

The Perl code as a Github gist.

Here’s the top 100 words in Dutch.  Run the script to get a larger sample.
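The gist has the full spidering and scraping logic. Purely as an illustration of the frequency-counting core, here is a minimal sketch – not the original script – that reads plain Dutch text on STDIN, counts word frequencies and prints the top 100 with a translation link (the Google Translate URL format here is just an assumption):

```perl
#!/usr/bin/env perl
# Minimal sketch of the frequency-list idea, not the original gist:
# read plain text on STDIN, count word frequencies, and print the top
# 100 words, each with a Google Translate lookup link (URL format assumed).
use strict;
use warnings;

binmode STDIN,  ':encoding(UTF-8)';
binmode STDOUT, ':encoding(UTF-8)';

my %freq;
while (my $line = <STDIN>) {
    # Lowercase the line and split on anything that isn't a letter.
    for my $word (split /[^[:alpha:]]+/, lc $line) {
        next if length $word < 2;    # skip empty tokens and single letters
        $freq{$word}++;
    }
}

my $rank = 0;
for my $word (sort { $freq{$b} <=> $freq{$a} } keys %freq) {
    last if ++$rank > 100;           # top 100, like the published list
    printf "%3d. %-20s %6d  https://translate.google.com/?sl=nl&tl=en&text=%s\n",
        $rank, $word, $freq{$word}, $word;
}
```

Feed it any reasonably large chunk of Dutch text and you get a rough priority list of what to learn first.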

I want to develop it further by adding a flash-card generator and auto-generated quizzes, since both methods are known to be good for language learning. It would also be nice to have it remember the words you’ve seen and learnt.

Also, disclaimer: I’ve lived over 3 years in the Netherlands and my Dutch is awful, so this approach to language learning does not, so far, have a good track record.

 

Mind hacks, recommendations and behavioral heuristics: 2012′s top articles on online consumer behavior

From Twitter: @matseinarsen December 22nd, 2012

Understanding consumer psychology and online behavior has become essential and mainstream knowledge for e-commerce development in 2012.  While some of us might regret that the cat is out of the bag, it also means a lot of smart people are figuring out a lot of smart things.  Below are what I found to be the most insightful and actionable articles in 2012.

There are two things mostly missing: recommender systems are still exclusively the domain of data crunching and algorithms, while I’d like to see more on inspiration and on getting people out of the filter bubble. The other thing I haven’t seen much of in 2012 is any interesting work on stickiness and loyalty.

If you follow my Twitter feed, you’ve seen almost all of these articles. If not, now you know why you should follow me, and I’d really love you if you do!

 

Persuasion

9 Things to Know About Influencing Purchasing Decisions (ConversionXL)

Persuade with Pictures (Neuromarketing)

Secrets from The Science of Persuasion (by Robert Cialdini & Steve Martin)

A – Z of persuasion (Richard Sedley, Loopstatic)

50 Ways To Seduce Your Web Visitors With Persuasive Landing Pages (Kissmetrics)

“Self-Efficacy” = a highly competent persuasion technique (+5 conversion tips!) (Bart Schutz, Online Persuasion)

“Autonomy”: a Super Persuasive Technique (+ 5 conversion tips!) (Bart Schutz, Online Persuasion)

Nine valuable techniques to persuade visitors to buy in 2012 (Paul Rouke, Econsultancy)

Lings Cars and the art of persuading visitors to buy (Paul Rouke, Econsultancy)

 

The Hard Sell

Testing

From AB tests to MAB tests (talk by John Myles White)

Three reasons to stop A/B testing (Maurits Kaptein on Econsultancy)

Experimenting at Scale (Josh Wills, Google)

Trustworthy Online Controlled Experiments: Five Puzzling Outcomes Explained (Kohavi, Deng, Frasca, Longbotham, Walker & Xu, Microsoft)


Recommendations

Spotify solves discovery by discovering music ain’t so social after all (Robert Andrews on paidContent)

ACM Recommender Systems 2012: Most discussed, tweeted papers & presentations #RecSys2012. Blog reviews. Datasets. Social Graph. Links (Data Science London)

Evaluating the effectiveness of explanations for recommender systems (Tintarev & Masthoff, User Modeling and User-Adapted Interaction)

Social behaviour

Consumers’ ‘herding Instinct’ Turns On and Off, Facebook Study Shows (Science Daily)

Dark Social: We Have the Whole History of the Web Wrong (The Atlantic)

5 Design Tricks Facebook Uses To Affect Your Privacy Decisions (Avi Charkham, TechCrunch)

NYU Stern Professors Develop New Method to Measure Influence and Susceptibility in Social Networks (Sinan Aral, NYU)

Social contagion: What do we really know? (Duncan J. Watts, PopTech!)

Creating Effective Loyalty Programs Knowing What (Wo-)Men Want (Valentina Melnyk, UVT)

 

Some Offline Learnings

Buy Design: Meet Paco Underhill, retail anthropologist (Metafilter post)

Bizarre Insights From Big Data (New York Times)

The Touch-point Collective: Crowd Contouring on the Casino Floor (Natasha Dow Schüll, Limn)

 

Design & Other Mind Games

Hacking the brain for fun and profit (Mind Hacks)

61 Behavioral Biases That Screw Up How You Think (Aimee Groth, Gus Lubin & Shlomo Sprung, Business Insider)

If… (Introducing behavioural heuristics) (Dan, Design with Intent)

 

A Tripline map: Morocco, Spain, Portugal

From Twitter: @matseinarsen July 2nd, 2012

Tripline lets you create trips on maps. I just had to test how embedding their maps works, so this blog now has a little Tripline map of the trip I did through Morocco, Spain and Portugal with @angelarhodes in April/May. Check it out – it’s a cool little thing to play with.

Angela’s blogpost: Morocco – Assault on the Senses.

 

 

Is your A/B testing effort just chasing statistical ghosts?

From Twitter: @matseinarsen June 17th, 2012

I’ve always felt that the idea of repeated significance testing error and false positive rates is a bit of a pedantic academic exercise. And I’m not the only one: some A/B frameworks let you automatically stop or conclude at the moment of significance, and there is blessedly little discussion of false positive rates online. For anyone running A/B tests there’s also little incentive to control your false positives. Why make it harder for yourself to show successful changes, just to meet some standard no one cares about anyway?

It’s not that easy, because it actually matters, and matters a lot, if you care about your A/B experiments, and not least about what you learn from them. Evan Miller has written a thorough article on the subject, How Not To Run An A/B Test, but it’s a bit too advanced to illustrate the effect very well. To demonstrate how much it matters, I’ve run a simulation of how much impact you should expect repeated testing errors to have on your success rate.

Here’s how the simulation works:

  • It runs 1,000 experiments, each with 200,000 fake participants divided randomly into two experiment variants.
  • The conversion rate is 3% in both variants.
  • Each individual “participant” gets randomly assigned to a variant and either the “hit” or “miss” group based on the conversion rate.
  • After each participant, a g-test type significance test is run, testing whether the distribution is different between the two variants.
  • I then count, at both the 90% and the 95% probability level, every experiment that reached significance at any point during its run.
  • As the g-test doesn’t like low numbers, I didn’t check significance during the first 1,000 participants of each experiment.
  • You can download the script and alter the variables to fit your metrics; a rough sketch of the idea follows below.
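Here is roughly what such a simulation looks like as code. It is an illustrative sketch rather than the exact script linked above; to keep the runtime reasonable it peeks at significance every 1,000 participants instead of after every single one, and it uses a plain 2x2 G-test with one degree of freedom:

```perl
#!/usr/bin/env perl
# Rough sketch of the repeated-significance-testing simulation described
# above. It is an illustration, not the exact script linked in the post:
# to keep the runtime reasonable it checks significance every 1,000
# participants rather than after every single one.
use strict;
use warnings;

my $experiments  = 1_000;    # simulated A/B experiments
my $participants = 200_000;  # fake participants per experiment
my $conversion   = 0.03;     # identical conversion rate in both variants
my $check_every  = 1_000;    # how often to peek at the significance test
my $g_crit_90    = 2.706;    # chi-square critical value, 1 df, p = 0.10
my $g_crit_95    = 3.841;    # chi-square critical value, 1 df, p = 0.05

# G-test statistic for a 2x2 table of hits and misses per variant.
sub g_statistic {
    my ($hit_a, $miss_a, $hit_b, $miss_b) = @_;
    my @obs   = ($hit_a, $miss_a, $hit_b, $miss_b);
    my $total = $hit_a + $miss_a + $hit_b + $miss_b;
    my @row   = ($hit_a + $miss_a, $hit_b + $miss_b);
    my @col   = ($hit_a + $hit_b,  $miss_a + $miss_b);
    my @exp   = (
        $row[0] * $col[0] / $total, $row[0] * $col[1] / $total,
        $row[1] * $col[0] / $total, $row[1] * $col[1] / $total,
    );
    my $g = 0;
    for my $i (0 .. 3) {
        next unless $obs[$i];            # empty cells contribute nothing
        $g += $obs[$i] * log( $obs[$i] / $exp[$i] );
    }
    return 2 * $g;
}

my ($ever_90, $ever_95, $final_90, $final_95) = (0, 0, 0, 0);

for my $e (1 .. $experiments) {
    my %n = (hit_a => 0, miss_a => 0, hit_b => 0, miss_b => 0);
    my ($seen_90, $seen_95) = (0, 0);

    for my $p (1 .. $participants) {
        my $variant = rand() < 0.5         ? 'a'   : 'b';
        my $outcome = rand() < $conversion ? 'hit' : 'miss';
        $n{"${outcome}_${variant}"}++;

        # Peek at the running result, as a repeat-tester would.
        next if $p % $check_every;
        my $g = g_statistic( @n{qw(hit_a miss_a hit_b miss_b)} );
        $seen_90 = 1 if $g >= $g_crit_90;
        $seen_95 = 1 if $g >= $g_crit_95;
    }

    $ever_90 += $seen_90;
    $ever_95 += $seen_95;

    # The fixed decision point: test only once, at the very end.
    my $g_final = g_statistic( @n{qw(hit_a miss_a hit_b miss_b)} );
    $final_90++ if $g_final >= $g_crit_90;
    $final_95++ if $g_final >= $g_crit_95;
}

print "Reached 90% significance at some point: $ever_90 / $experiments\n";
print "Reached 95% significance at some point: $ever_95 / $experiments\n";
print "Significant at 90% at the final check:  $final_90 / $experiments\n";
print "Significant at 95% at the final check:  $final_95 / $experiments\n";
```

Because this sketch peeks only every 1,000 participants, its “at some point” counts will come out somewhat lower than the every-participant numbers reported below, but the pattern is the same, and the final-check rates land around the expected 10% and 5%.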

So what’s the outcome? Keep in mind that these are 1,000 controlled experiments where it is known that there is no difference between the variants.

  • 771 experiments out of 1,000 reached 90% significance at some point
  • 531 experiments out of 1,000 reached 95% significance at some point

This means that if you’ve run 1,000 experiments and didn’t control for repeated testing error in any way, a rate of successful positive experiments of up to 25% might be explained by the false positive rate alone. And you’ll see a temporarily significant effect in around half of your experiments!

Fortunately, there’s an easy fix. Select your sample size or decision point in advance, and make your decision only then. These are the false positive rates when the decision is made only at the end of the experiment:

  • 100 experiments out of 1,000 were significant at 90%
  • 51 experiments out of 1,000 were significant at 95%

So you still get a false positive rate you should not ignore, but nowhere near as serious as when you don’t control correctly. And this is exactly what you should expect when running with significance levels like this – at a 95% level, a 5% false positive rate is the probability you signed up for, and at this point you can talk about real hypothesis testing.
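As for choosing that sample size in advance: the usual normal-approximation formula for comparing two proportions is a reasonable starting point. The sketch below is a generic power calculation with illustrative numbers, not something taken from the original experiment setup:

```perl
#!/usr/bin/env perl
# Per-variant sample size for a two-proportion test, using the usual
# normal-approximation formula. The rates below are illustrative only.
use strict;
use warnings;

my $p1  = 0.030;    # baseline conversion rate
my $p2  = 0.033;    # smallest lift worth detecting (a relative +10%)
my $z_a = 1.96;     # two-sided alpha = 0.05
my $z_b = 0.84;     # power = 0.80

my $n = ( $z_a + $z_b )**2
      * ( $p1 * (1 - $p1) + $p2 * (1 - $p2) )
      / ( $p1 - $p2 )**2;

printf "Run each variant to roughly %d participants, then decide once.\n",
    int( $n + 0.5 );
```

With a 3% baseline and a 10% relative lift, that works out to roughly 53,000 participants per variant before you look at the result.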

 

 

Amazon recommendations

From Twitter: @matseinarsen October 30th, 2011

It’s almost 10 years old, but this is an excellent article from Greg Linden, Brent Smith and Jeremy York on how Amazon.com does product recommendations:  Amazon.com Recommendations – Item-to-Item Collaborative Filtering


Going to OSCON?

From Twitter: @matseinarsen July 24th, 2011

Interested in discussing psychology and software development? I’m at OSCON in Portland, Oregon all this week, and I would be really interested to chat with others interested in psychology.

I’m mainly at the conference to help hiring for Booking.com so come and ask for me at the Booking.com stand in the Expo hall.

And if you’re interested in any of the many positions we’re looking to fill, also drop by, of course! Have a look at our available openings at the Booking.com jobs portal. We’re still trying to get hold of many, many experienced Perl developers, and we’re also willing to teach Perl to highly experienced developers coming from other languages.

What makes a superstar developer?

From Twitter: @matseinarsen June 22nd, 2011

A funny discussion is going on at HBR Blogs: management-type blogger Bill Taylor suggests our culture wrongly celebrates the superstars, and claims great people are overrated, at the expense of well-functioning teams (via Igor Sutton). But to illustrate his point he uses software engineers as his example. Cue outpouring of frustration – Bill’s getting hammered in the comment section.

So what’s the problem?

First, try to look past the fact that Bill skipped 30 years of research and experience in software engineering and stomps into it like a PHB cliché, seemingly assuming his opinion is as valid as any research on the subject. That alone probably triggered the defensive reaction in any software developer accidentally stumbling into the Harvard Business Review blog section.

The main misunderstanding is the assumption that there is only one type of talent. Both Taylor and a fair number of the commenters make this mistake and apply their experience with basic bell-curve-measured skills to a type of talent that is ruled by other laws. Nassim Nicholas Taleb discusses this distinction in great detail in The Black Swan as the environments of “Mediocristan” and “Extremistan”: the first you can understand using the bell curve and the Gaussian distribution; in the latter, differences are of an order of magnitude and are qualitative or disruptive.

For example, in most manual labour or production-line work, a practitioner can get good or even excellent, but mostly within a range that can be measured safely with standard deviations, and excellent performance differs from average performance in its output, not in its nature. Here, throwing more people at the problem can solve it: maybe your top salesman makes ten sales a week while your average one makes 4. Well, throw in 3 average salesmen and you’re just as good. That’s not necessarily good economics or a good idea, but it can get you where you want to go.

In contrast, in the world of “Extremistan”, a difference in skill can be of such a magnitude that it makes a qualitative difference. In the comment section of Taylor’s article, someone asks, “Would you want a Shakespeare or 100 Bill Taylors?”, with countless variations on the theme. Software engineering is that sort of talent. A “superstar developer” isn’t necessarily a programming Shakespeare, but he or she can make something a less qualified individual never could, or do it so fast it makes the difference between staying in the competition or not. Or just connect the dots and save the day.

Throw in software engineering’s special problem, that putting one more person on a task tends to double the time it takes to solve it, and the effect is even larger.

But then the Bill Taylors take their experience with the “Mediocristan” type of talent and apply it to the very different world of software engineering. It comes out, as is well pointed out in the comment section, as commoditization of something that cannot be commoditized. Software development can’t be reduced to the number of lines written per day in any meaningful way. Even business people who really, really want to think about it that way will still be wrong. It has been discussed and demonstrated countless times. The classic “The Mythical Man-Month” explains why you can’t just think of your software engineers as burger flippers.

The vitriol the original blog post received comes from every software engineer’s experience with clueless managers who approach development in the Bill Taylor way. Today’s successful companies are run differently, by the Zuckerbergs and Steve Jobses who are playing in the winner-takes-it-all world of the Internet. It doesn’t take that many people, as long as you have the right ones – Facebook has 2000 employees serving over 600 million users.

So what makes a super star developer?

I believe in the old truism that the difference between developers can be of a 100x magnitude – I actually think the Net Negative Producing Programmer is a reality, so accordingly a top developer is infinitely better than the worst… However, unless you think the superstars have their skills handed down to them by divine intervention, something must have brought them to this point. Here are some points I believe make up the superstar:

  • It’s knowing the codebase well. The superstar is often the guy who knows the codebase to the last semi-colon. It’s not about being the smartest guy in the room, but just knowing exactly where to hack in that little change, and that little change, and that little change, and that little change…. before lunch.
  • It’s content knowledge. The superstar is the guy who also knows the content of what he is coding really well. If you’re making a chess bot, the developer who is also a chess grandmaster knows all the elements, the edge cases and the purpose of what he is trying to do. He will have a working prototype running while the developer without any chess background is still trying to understand all the implicit assumptions in the instructions he got from his Scrum Product Owner. I think this one is essential, and so often overlooked. Knowing the domain you work in, and working with what you find interesting, not only makes development work a whole other ballgame, it also does wonders for motivation.
  • It’s situational. You can’t throw Linus Torvalds, Larry Wall or Bill Joy into your hack app shop and expect to have the next Angry Birds in a month. You need the right person and the right setting.
  • It’s knowing programming well. The superstar developer doesn’t have to be a superstar in the inner workings of his programming language – but he knows it well enough that it doesn’t get in the way of reaching his ultimate goal.
  • It’s practice, practice, practice. Yeah exactly! No one is born into superstardom. This is written about a lot, and Malcolm Gladwell’s thesis of the 10,000 hours of practice really applies to software engineering too. One complicating factor with this point is, as I wrote about in accelerating your Perl learning, that most developers are on a lifelong learning mission anyway. They (we…) always look to learn something new, so what sets the superstar apart from the average? It’s not that easy to pick out, but in his article about the 10,000 hours of practice, Gladwell touches upon the differences between training and learning. It’s a large field of research and I hope to post more about it later.
  • It’s a high level of intelligence. But not necessarily an extreme level of intelligence. However, a certain level of numeracy and the ability to think in abstractions is necessary. Maybe at some point we’ll find that super developers are able to hold more variables in their working memory at the same time, or something along those lines, and that will turn out to explain the differences. But I have yet to see any research like that.

And it’s motivation… and the right management… but these are things external to the superstar, things one can move around to find. The brain finds its ideal setting.

So there, that’s what put the superstars apart from the average. I’d love to hear others’ take on it.

The sad thing is…

…that the reality is that a well-functioning team can usually match the lone superstar, or even has its own advantages. It’s actually a good message: let’s not just celebrate these few people, who are carried forward on the shoulders of people going through the daily grind of facilitating things and cleaning up the mess. Or: we can achieve great things just by working together. It’s just presented so boneheadedly wrong.

Finally, Bill followed up with a clarification of sorts: basically that IBM is made up of average people and look how long they have lasted, while Enron was full of superstars, and look how they crashed. I wonder what IBM thinks of his base assumption there.

Can you detect user emotion with only mouse movements?

From Twitter: @matseinarsen June 7th, 2011

Trying to learn more about how emotion affects ecommerce, I came across the book “eMotion: Estimation of User’s Emotional State by Mouse Motions” by Wolfgang Maehr. Basically, Wolfgang Maehr found that you can correlate certain types of mouse movements with emotional states. Specifically, he found that mouse acceleration, deceleration, speed and uniformity could predict arousal, disgust/delight, and anger/contentedness, all in a sample of 39 participants.

But… how is this not available to me in a handy JavaScript library? I am just dreaming of reading off the emotional state of website visitors per page. Or per blog post, for that matter…

If you know of anyone who has made any implementation of something like this, please please leave a comment!

 

Full research paper with numbers here: eMotion: Estimation of the User’s Emotional State by Mouse Motions.