Tuesday, August 15, 2017

Mindmap retrospective

One common way I do a quick retrospective after exercises, to "burn the fuel of experience", is an observation retro. I first learned to do these with post-it notes, grouping them into clusters based on similarity. I still like and use that method a lot, but I find myself doing a mindmap version very often because of its ease and iterative nature. This is a short write-up on how to do one.

Mindmap from Agile2017 session "The ROI of Learning Hour"

  1. Open a mind map
    (I use MindMup)
  2. Label the middle (blue) node
  3. Collect observations from the audience
  4. Add structure as needed

Collecting Observations


This is pretty simple. Ask for observations. When someone shouts one out, add it to the mind map. It's OK to rephrase: try to get them as short as possible (1-2 words), but if you can't, add a whole sentence. For example, someone might say "the chart with urgent things getting in the way of important things", which I would type up as "Important vs urgent".
Another side note is to ask for "observations" rather than "learnings". This might seem small, but it can make a large difference to the amount of feedback you get. Learnings can be intimidating and make it seem like there are right and wrong answers.

Adding Structure

Anytime I saw 2 or more concepts that had a similar base or extended an idea, I would add that node and move it around the map. I highlighted these examples in yellow above. This does a couple of things:
  1. Calls out abstractions
  2. Triggers more observations
You might notice I also added "thresholds" even though there was only 1 idea under it. Or that I didn't add "small changes over time", but did extend the ideas of 300 pushups, micro habits & change blindness to it.

Abstractions also trigger variations. If we are looking at this blog post and someone points out the labels, I could abstract that to "fonts", in which case they might also point out the bold or normal fonts. But I could also abstract it to "formatting", in which case I might get color (black, blue), numbered lists, tabs, images and text justification.
Either way, more of the experience is being inspected.

This process of adding structure to the observations is an interesting way of facilitating. It sort of reminds me of 'training from the back of the room' (although I am clearly at the front of the room during this).


Sunday, July 9, 2017

On Investing

A while back I started really looking into the math of compound interest. I even made a video about it. All of this got me thinking about my own financial investments. While I've always been good at saving, I've never been that good at investing, so I did a little reading and then ran an experiment: I took my money, divided it into thirds and tried out 3 investment ideas. I also set up a calendar alert for 1 year in the future to review the results. Today that calendar alert went off; here are the results.

Betterment

Betterment is a robot trader. The idea is to be like an index fund but a bit better. My results were the opposite: it was like an index fund but a bit worse. Still, this isn't to say it was bad, just that it always lagged a bit behind the index fund I bought.

Results: 11.4%


S&P 500 Index Fund

Vanguard's S&P 500 index was a solid choice. Like Betterment it also has extremely low fees, and it consistently gave slightly better results.

While it seemed almost the same, I would like to point out that results vary with compounding, and the 1.5% difference over 40 years would add up. Let's take $1,000 for 40 years:
At 11.5% = $77,800
At 13% = $132,781
So about 70% better over 40 years.
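The arithmetic above is easy to check with a few lines of Python. This is just a sketch of simple annual compounding (ignoring taxes, fees, and additional contributions):

```python
def future_value(principal, annual_rate, years):
    """Simple annual compounding: principal * (1 + rate)^years."""
    return principal * (1 + annual_rate) ** years

low = future_value(1000, 0.115, 40)   # ~ $77,800
high = future_value(1000, 0.13, 40)   # ~ $132,781
print(round(low), round(high))
print(f"{(high - low) / low:.0%} better")  # ~70% better
```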


Results: 13%

Stocks

The remainder I split equally into 5 companies. This requires a fair amount of explanation, so I'll start with the basics and go into detail afterwards. The main takeaway here is that it's very volatile, with swings of as much as 10% in a given week. Compared to either Betterment or Vanguard this is a rather extreme change. I also feel there is straight-up a bunch of luck involved and that I might regret this at any moment. However, so far it's been the best investment of the three.

Results: 32.6%

Stocks - My rationale/rationalization

I had a fairly simple investment philosophy: Invest long term in companies with smart people doing smart things.

As such I bought 5 companies:
Facebook - Impressed by developer culture, CI practices & the hiring of Kent Beck
Google - Impressed by 20% time, Go, Kubernetes, AI and culture continuously refined by Larry Page
Amazon - Impressed by microservices, continuous focus on market growth, and AWS
Netflix - Impressed by Devops, open source, pivots and team cultures.
Tesla - Impressed by the products and CI in cars (I actually know very little about the inside of the company)

I only bought once. Didn't do any day trading. Didn't do any financial investigation. This might seem a bit irresponsible, but my theory is that it's all a bit of a gamble and I'm more likely to overvalue my understanding than gain real insight. I'm also not doing any market analysis of how the companies 'fit' into the bigger market. I'm simply trusting that smart people doing smart things is going to win.

I would also like to state that I think I might have just gotten lucky. I think it's easy to fall prey to survivorship bias and assume that success is somehow predestined.

The stocks make me a bit nervous, but I also realize that they have a much larger potential to generate real wealth: $1,000 for 40 years at 32.6% = $79,751,886.
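That eye-popping number falls out of the same simple annual-compounding formula (a sketch only; it assumes, very unrealistically, that a 32.6% return could be sustained for 40 years):

```python
def future_value(principal, annual_rate, years):
    """Simple annual compounding: principal * (1 + rate)^years."""
    return principal * (1 + annual_rate) ** years

# $1,000 at 32.6% for 40 years is on the order of $79.75 million
print(round(future_value(1000, 0.326, 40)))
```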

Monday, March 13, 2017

Why we did a speed meet at our conference and why you should too!


At European Testing Conference 2017, we had a full session devoted to a speed meet.

What is a speed meet? At its most basic it's talking to someone new for 5 minutes, then rotating and doing it again 9 times.

Here's what it looked like:


Mind Maps: What do you talk about?

Of course this raises the question of what to talk about. To solve this, we took a suggestion from Jurgen Appelo and had everyone make a small mindmap about themselves. When you sat down, you handed your map to the other person. There is a lot of information between the 2 mind maps, and people would easily find something that they were interested in. And this is the rather amazing thing about geeks:

5 minutes of small talk is terrifying for geeks.
Given a topic they care about, 5 hours isn't enough time.

Here's an example of one of the participants' mind maps:

Why do this?

Conferences are amazing places. They're a great opportunity to mix and talk with many people you wouldn't normally get a chance to interact with. However, if you are new to a conference this can be an overwhelming and terrifying prospect. While most people are friendly after you meet them, strangers never seem that way. We wanted to make it easier to have a good 'hallway track'. After talking to 9 people, everyone had found at least 1 person they liked. The conference became a lot more friendly. We also heard more things like:

"Kara! Have you met Matt?"


Lunch

Lunch time can be especially uncomfortable if you don't know anyone at a conference. Finding a place when every table is full of strangers already talking? Often we just try to find a place to hide away and eat quietly. This is why we did the speed meet on the morning of the first day of the conference. Lunch was right afterwards, and it was nice to know at least 1 person to eat with.
Lunch should be friendly, not scary.

Details:

Just do it

Structure and lack of choice is your friend here. Notice that while we normally had 3 tracks, we only had 1 during the speed meet. We didn't want to encourage people to skip it. We also spoke to the speakers to encourage them to participate. It can be a special treat for a newbie to get a chance to speak 1 on 1 with a presenter.
We also didn't do it as an 'optional' morning session. Those sessions usually have a very low percentage of the conference attending. For example, many conferences have a lean coffee morning session, but for a 1000-person conference it isn't unusual to have only 20-30 people at these.


Homework

We gave multiple chances to create the mind maps beforehand:

  • Emailed the day before the conference
  • Mentioned at the speakers' dinner
  • Mentioned in the opening slides for the conference
Nonetheless, there were still a bunch of people who put theirs together as the session started. That's OK; it's meant to be quick and easy. We provided lots of paper and pens.

Rotations

I highly suggest a few (4-5) practice rounds of moving 1 seat to the left. It's amazing, but if everyone waits for the seat next to them to become empty (× 150 people), it can take a few minutes to move people. If everyone stands, moves & sits at once, it takes 3 seconds.

Early

This sets the tone for the conference. Do it early, not at the end of the conference.



How did it work out?

Excellent! It can be hard to judge the effectiveness of an activity. We did a retrospective and got many notes about liking it, but is liking it the same as it being good? Maybe they just remembered it because it was different?
I had a bit of an advantage, as this was the second year for ETC and we could compare it to last year. I also have all the other conferences I attend to compare it with.
However, the biggest indicator for me was the party the first night. While it's hard to articulate, it just felt friendlier. People moved between tables more, talked more. The whole atmosphere felt warmer.

10 / 10 Would repeat!


Wednesday, February 1, 2017

Thoughts on conference design

Next week is the European Testing Conference.
We do a lot of things to make this conference better.


European Testing Conference
Feb 9th & 10th (Pre-conference Trainings on the 8th)
Helsinki, Finland
25% off discount code: FRIENDSOFLLEW

Here are some of the things we do to make a better conference for the attendees:

1)   Facilitate meeting other people
We all know that one of the best parts of conferences is the people you meet but it can be hard to strike up a conversation with complete strangers. 
Knowing this, we set up 2 events structured to introduce you to new people.
a)    Speed Meeting:
This occurs at 11:00 on the first day and is the only session at that time. The whole conference sits and talks to a new person for 5 minutes, then rotates and does it again. 45 minutes later, you have talked to 9 new people. Sometimes that is enough to help you find the right person; sometimes it's one of the friends of those 9 that you get introduced to. Either way, the conference becomes a lot friendlier afterwards.
b)   Facilitated Discussion
Later that day we will do another 45 minutes of round table discussions (8 people per table). This will follow the lean coffee format and allows people to talk about the subjects they are interested in with each other. It is also a chance to speak with the speakers you are interested in, as each speaker will facilitate one table.

2)   Workshops
One of the challenging things about workshops is that it's hard to actually go to them when there is the easier choice of just listening to a talk. The lazy part of us wins out much of the time despite our best intentions. Knowing this, we never run the workshop sessions at the same time as normal lecture sessions. So you don't have to decide *if* you do some hands-on learning, you only have to decide which one you want to do.

3)   Hallway track
Meeting new people and doing hands-on learning has a way of stirring up ideas. Many experienced conference goers talk about the 'hallway track' as a valuable part of conferences, but new people often miss out on this aspect. Knowing this, at 14:15 on the last day, we set aside 3 sessions of open space, where you can announce the topics you're interested in and then hold mini-sessions with like-minded people. This helps ensure everybody gets the most out of the new ideas they have had.

We hope you agree that these steps help to create a better conference and hope to see you there!

We also do a lot to make it better for the speakers, but that's another blog post.



Monday, January 9, 2017

Is there a perfect API Design?

Part I:The Problem


I write a verification framework called ApprovalTests. It uses itself to test itself and is generally bug-free. So when exploratory tester extraordinaire Maaret Pyhäjärvi wanted to use it as a test target, we were both rather excited. I was excited to show off how good automated testing and TDD can be. She was excited to show off how much it still failed to cover.

She won.

To be clear, the stuff I tested was pretty solid. However, my testing was woefully incomplete on the system as a whole. A mere hour of testing discovered gaping holes in different environments, in documentation and onboarding of new users, and usability issues with my API.

These are hard problems, many of them still haven't been solved and I wanted to talk about one in particular that I am still struggling with today:

Naming


I use something called reporters and annotations in ApprovalTests. It means you can write code like

[UseReporter(typeof(DiffReporter))]

The issue was in discoverability. If you start typing this you get very little help from your editor:

You get some other reporters [KDiff, Tortoise], but not many, and those happen to be unnecessary anyway: DiffReporter will use them if they exist on your system.

If you type "Reporter" instead, this problem goes away and over 50 options will present themselves to you, but it's not intuitive to know this, and a few painful usability tests showed this to me over and over as I watched in silence and frustration.



Renaming can fix this:

[OnFailure(typeof(ReportWithDiffTool))]

I ran an online poll; most people preferred this: 70% vs the 30% that preferred the previous version.
I think I'm in the 30%; I prefer my classes to have noun names, but I might be partial because this is what I'm used to.

So I'm faced with changing a lot of the API.

The issue is:


How do I know which one is the right answer?

How do I know there *is* a right answer?


Part II: 2 Pepsis and the world of choice

Malcolm Gladwell did a great TED Talk on choice. In it he talks about a taste test to find the perfect sweetness for Pepsi. There were 2 peaks, so they averaged them out. But that wasn't the right answer; the right answer is that there are 2 different preferences for sweetness. Because of this, as a culture we have changed from a single 'perfect' spaghetti sauce to an aisle of choices.


Maybe we should have the same occur with APIs?
Maybe the issue isn't choosing between UseReporter and OnFailure. Maybe I need to have both.

This requires a bit of finesse. I don't yet have answers on how to version and package these solutions.
Should they be in separate NuGet packages? Should they interplay with each other?
How do I balance having one clear way with choice paralysis?

I don't know the answers to these questions, but I am beginning to see that maybe there isn't the one perfect API...