If you’re working on something that users actually use, you’re most likely also acquiring data en masse. Free-text feedback in particular tends to get lost, or stay in the hands of a few analysts. Here’s how a few easy steps can turn that data into actionable insights instead.
Over the past year, close to 2,000 people applied for a mentorship, but while browsing our database, I found that only 48% of them were accepted. I wanted to find out why mentors decided to reject mentees, and what steps I could take to make the process easier.
Acquiring the data
When an application gets rejected, mentors can add a tiny piece of feedback to their decision. I always recommend that product builders collect feedback wherever they can – understanding why your users do what they do is invaluable, and it usually only takes them a second to provide it. With that in mind, it was really easy to filter for all applications with feedback attached, using the Django ORM.
Done! The result? I had 200 applications in hand with feedback attached. I iterated through those results and wrote every piece of feedback to a text file. I’ve got my data.
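Sketched with the Django ORM, that query and export might look like the following – the model and field names are my assumptions for illustration, not the actual schema:

```python
# Hypothetical model and field names -- adjust to your actual schema.
from applications.models import Application

rejected_with_feedback = (
    Application.objects
    .filter(status="rejected")
    .exclude(feedback__isnull=True)
    .exclude(feedback__exact="")
)

# One piece of feedback per line, ready for cleaning.
with open("feedback.txt", "w") as f:
    for application in rejected_with_feedback:
        f.write(application.feedback + "\n")
```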
Things looked messy at first. This is really a case-by-case thing, but to get your data into a clean form, I’d recommend doing all of these:
- Remove all special characters. We’re mostly interested in words, not signs.
- Same goes for punctuation.
- You most likely have some escaped characters in your texts. Get rid of stray `\n`, `\t` and similar.
- If you see any recurring texts, you might want to get rid of those too (in my case, there was a lot of “None” or “No Reply” in there – not helpful)
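The steps above can be sketched in a few lines of Python – the `boilerplate` set is just the recurring non-answers from my data, so adjust it for yours:

```python
import re

def clean_feedback(lines):
    """Normalize raw feedback lines following the cleaning steps above."""
    boilerplate = {"none", "no reply"}  # recurring non-answers seen in my data
    cleaned = []
    for line in lines:
        # Undo literal escape sequences like "\n" and "\t" left over from export.
        line = line.replace("\\n", " ").replace("\\t", " ")
        # Strip punctuation and special characters -- keep words and digits only.
        line = re.sub(r"[^A-Za-z0-9\s]", " ", line)
        # Collapse runs of whitespace.
        line = " ".join(line.split())
        if line and line.lower() not in boilerplate:
            cleaned.append(line)
    return cleaned
```

For example, `clean_feedback(["Too busy right now!\\n", "None"])` keeps only a cleaned `"Too busy right now"`.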
Assuming we’re all cleaned up now, let’s continue.
Making a Word Cloud
I looked for a word cloud tool, which would allow me to do exactly that.
I was naive at first: I fed my whole document to the tool, and this is what it spat out.
There may be some signal in that, but it’s highly diluted by words I’d expect in every sentence, and by others I’d expect in every single piece of feedback. Let’s filter those out.
For the second group, I simply added a list of words that don’t provide much value for me – looking at that initial word cloud, things like “Mentor” or “Good luck”. Let’s try again.
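The filtering itself can be sketched with nothing but the standard library – the word lists below are illustrative stand-ins, not my full lists. (If you generate the cloud with the popular `wordcloud` Python package, which I’m assuming here since the post doesn’t name its tool, its `stopwords` parameter accepts exactly such a set.)

```python
from collections import Counter

# Generic words expected in any sentence (a tiny illustrative sample --
# real stopword lists are much longer).
STOPWORDS = {"i", "a", "an", "the", "to", "and", "you", "is", "of", "for", "am"}
# Words expected in every piece of feedback in this particular domain.
DOMAIN_WORDS = {"mentor", "mentee", "good", "luck"}

def word_frequencies(text):
    """Count words after dropping both generic and domain-specific noise."""
    words = [w for w in text.lower().split()
             if w not in STOPWORDS | DOMAIN_WORDS]
    return Counter(words)
```

A word cloud is essentially a rendering of these counts, so if the top remaining words still look generic, grow the domain list and regenerate.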
Now, it’s not perfect, but at least I can identify some words and phrases that ring my alarm bells. What do we do with that now?
Right off the bat, I can identify some key issues that mentors face when deciding if they want to take on a new mentee:
- Time / Busy / Unavailable: It seems mentors may simply lack the time to take on a new mentee. They should be able to signal that beforehand.
- Accepted another / New Mentee: Sometimes multiple mentees apply at once. The time investment in the first few days is big, so taking on two at the same time doesn’t work out.
There are also some words and sentences which are more ambiguous. How are they used in context? In these cases, I need to go back and look for feedback sentences using these words.
- Interest: In what context is this used? It’s a common word in rejections.
- Available: The combination “not available” is less common than “available” on its own. Why are people rejecting if they are “available”?
- Find Someone / Someone Else: Is it common that somebody else is more suited? Are mentees applying to the wrong people?
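Going back for context is just a keyword search over the raw feedback lines. A minimal helper might look like this (the names are mine, for illustration):

```python
def feedback_containing(lines, keyword):
    """Return every feedback line that mentions the keyword, for manual review."""
    keyword = keyword.lower()
    return [line for line in lines if keyword in line.lower()]

# Example: inspect how "available" is actually used in rejections.
# for line in feedback_containing(all_feedback, "available"):
#     print(line)
```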
Usually, you can identify some things right away; others need context. It’s a good way to filter vast amounts of unstructured data like this and boil them down to a smaller set.
So, what do we get out of this? With a few tricks, I was able to turn 200 different, unstructured sentences into a set of maybe a dozen keywords that are interesting to me. There’s no bias here – no way I could ignore the big, bold words that appear on my screen.
Next up, I can apply the same principle to all the other unstructured data I have access to: Why did people apply? Where did they find us? Why do people cancel their mentorships? You can use this the same way when building something of your own. Honest feedback is difficult to get, and this is one of the most foolproof ways to get there.