There is an interesting discussion on the kanbandev Yahoo Group exploring how best to plan and track UX work that impacts all of the features of a product. This would certainly disrupt the regular flow of the team if all of the UX work had to be switched live on Day X. Jose shares a number of constraints:
- UX people are part of the team, so their work must be visualised with the rest of the team's work
- the UX workflow is different to that of the user stories; there is not necessarily any dev work, as it may just be investigation for future stories
In some respects you can leave a UX task like this on the board for a long time, similar to an extended Sprint Zero.
I think a whole-product UX revamp can still be broken down into manageable chunks such as Phase 1 / 2 / 3 / etc. You could then decompose that further into Design, Validation, User Testing, etc. Rolling out all of these changes on Day X can still be achieved, although it would be difficult to batch up all of that work, not to mention the waste as it sits in the source code, unavailable to users.
Some suggestions:
- use feature flags to roll out features one at a time and get feedback from beta customers (a minimal sketch follows these suggestions)
- keep the value behind the feature flag until all of the features are ready to be switched on
- keep the whole UX revamp story on the board under a different class of service; visibility is key for the whole team
- split out cycle time for this class of service so it doesn’t impact the rest of the board
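To picture the first two suggestions, here is a minimal feature-flag sketch in Java. It is illustrative only, assuming a simple in-memory flag store and group-based rollout; the names (FeatureFlags, "new-ux-board", "beta-customers") are my assumptions, not any actual implementation.

```java
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical in-memory feature-flag store: a flag maps to the user groups
// that are allowed to see the feature it guards.
public class FeatureFlags {

    private final Map<String, Set<String>> enabledGroups = new ConcurrentHashMap<>();

    // Switch a flag on for one group, e.g. beta customers first.
    public void enableForGroup(String flag, String group) {
        enabledGroups.computeIfAbsent(flag, k -> ConcurrentHashMap.newKeySet()).add(group);
    }

    // A user sees the feature only if one of their groups has it switched on.
    public boolean isEnabled(String flag, Set<String> userGroups) {
        Set<String> groups = enabledGroups.getOrDefault(flag, Set.of());
        return userGroups.stream().anyMatch(groups::contains);
    }
}
```

The new UX ships dark behind the flag; you enable it for beta customers to gather feedback, and on Day X you enable it for everyone, so the value stays captured behind the flag only until the whole set of features is ready.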
Fun times revamping the whole UX for a product!
Speaking of which, GreenHopper 5.9.1 is available today. Our team has been hiding features from mainstream customers and only making them available to our beta customers. We slowly roll new features out to all customers as they ‘solidify’ – not the whole product at once, although over time it will amount to a whole-product UX revamp. So, I understand where Jose and the others on kanbandev are coming from.
You discuss how the UX team is integrated into development throughout the process. I suppose they are making suggestions and design recommendations, but I’m wondering what these recommendations are based on. Do you have best practices (that the developers of the previous version were obviously unaware of)? Or do you examine the behavior of users in the previous version to see which behaviors have been undesirable (and if so, how do you decide which behaviors count as undesirable), and then, after measuring those undesirable behaviors (another question: how do you measure them?), go back to the design of the application to see what is causing them? Or do you just go by gut? And lastly, after you make these changes, how do you know you were successful at improving the UX? I’m currently engaged in UX measurement, so this is really interesting to me.
Hi Philip,
Jose doesn’t provide too much background on kanbandev as to the history. Here is what I can tell you from our experience on GreenHopper:
– The product evolved over 24 months in a very ad hoc fashion; in the early days you just want to get customers, respond to their feedback, sell, and grow.
– Atlassian acquired GreenHopper and we set about bringing the visual language into alignment with other Atlassian products – commonly referred to as ‘lipstick on a pig’.
– After 18 months and tremendous traction we hit a wall with the UX of GreenHopper. We could no longer implement features without impacting existing user experience.
– We knew that the user experience was challenging for new users, so we began a process to develop a new UX (greenfield) and iterate based on customer feedback.
– Capture feedback using the JIRA Issue Collector (http://bit.ly/GAUN6L). Respond, much like in the early days.
– Introduce opt-in in-product analytics to track behaviour and usage (a rough sketch follows this list).
– Bring a dedicated UX person into the GreenHopper team.
– Use the data to iterate on features and rethink the approach.
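To make the opt-in analytics point concrete, here is a rough sketch of what event capture could look like. It is a simplified assumption on my part, not the actual internal framework: buffer events for opted-in users, then flush them to whatever backend holds the data.

```java
import java.time.Instant;
import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical opt-in usage tracker: events are recorded only when the user
// has opted in, then flushed in a batch to the analytics backend.
public class UsageAnalytics {

    public record UsageEvent(String userId, String feature, String action, Instant at) {}

    private final boolean userOptedIn;
    private final Queue<UsageEvent> buffer = new ArrayDeque<>();

    public UsageAnalytics(boolean userOptedIn) {
        this.userOptedIn = userOptedIn;
    }

    // Drop the event silently if the user has not opted in.
    public void track(String userId, String feature, String action) {
        if (!userOptedIn) {
            return;
        }
        buffer.add(new UsageEvent(userId, feature, action, Instant.now()));
    }

    // Flush buffered events to the backend; printing stands in for that call here.
    public void flush() {
        while (!buffer.isEmpty()) {
            UsageEvent e = buffer.poll();
            System.out.printf("event user=%s feature=%s action=%s at=%s%n",
                    e.userId(), e.feature(), e.action(), e.at());
        }
    }
}
```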
The data these days is the key driver for changes. If we can’t demonstrate that a feature benefits our customers, and they actually use it, then why would we build it? It is waste.
We have always had an open and public backlog (https://jira.atlassian.com/browse/GHS) which led to great conversations with customers. We still use that, although now we have much better insight into actual usage.
Does that go some way to answering your question?
How are you measuring UX success today?
Thanks Philip,
Nick
“Introduce opt-in in-product analytics to track behaviour and usage” – is this a tool that you can insert into any Atlassian product, or perhaps customized just for GreenHopper?
We do the same thing; we call it usability logging, but the code has to be inserted into the application, similar to yours I guess.
I believe I met you in BJ by the way with Go2Group. Is that right? Phil
That is not a product today – it is an internal framework that was originally developed using Google Analytics to capture events. We’ve since switched the data storage and reporting to an internally built solution. Where can I learn more about your usability logging, Phil?
We use custom code to collect data based on events and on the stats we are trying to collect.
Since usability and user experience are task dependent and user-type dependent, we first segregate user types (you can do this by age of ID, time spent online, or whatever you deem appropriate and reasonable). Then you design the task and note the number of times users successfully complete it, the time to complete it, and the errors made along the way. So you end up with a 2×2 matrix (complete/non-complete, errors/no-errors) and a few in between. Then you start collecting times to complete the task as well. But ours is custom, so it is not as robust as a solution using Google Analytics code.
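A rough sketch of how that task-based logging could be structured: collect one record per task attempt, then derive the 2×2 completion/errors matrix and the completion times from those records. The class and field names here are illustrative assumptions, not the actual custom code.

```java
import java.time.Duration;
import java.util.ArrayList;
import java.util.List;

// Hypothetical usability log: one record per attempt at a defined task,
// per user segment, capturing completion, error count and time taken.
public class UsabilityLog {

    public record TaskAttempt(String userSegment,   // e.g. new vs long-time users
                              String taskName,
                              boolean completed,
                              int errorCount,
                              Duration timeToComplete) {}

    private final List<TaskAttempt> attempts = new ArrayList<>();

    public void record(TaskAttempt attempt) {
        attempts.add(attempt);
    }

    // The 2x2 matrix: complete/non-complete crossed with errors/no-errors.
    public int[][] completionByErrors(String taskName) {
        int[][] matrix = new int[2][2]; // rows: completed?, columns: had errors?
        for (TaskAttempt a : attempts) {
            if (!a.taskName().equals(taskName)) continue;
            matrix[a.completed() ? 0 : 1][a.errorCount() > 0 ? 1 : 0]++;
        }
        return matrix;
    }

    // Average time to complete, over successful attempts only.
    public Duration averageTimeToComplete(String taskName) {
        double avgMillis = attempts.stream()
                .filter(a -> a.taskName().equals(taskName) && a.completed())
                .mapToLong(a -> a.timeToComplete().toMillis())
                .average()
                .orElse(0);
        return Duration.ofMillis((long) avgMillis);
    }
}
```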