At OLX, where trust met commerce, I led design efforts within the Trust and Safety tribe. This was the starting point of an initiative that would reshape the user experience. I collaborated with a diverse group of Product Managers, Product Analysts, UX Researchers, UX Writers, and Engineers, setting the stage for a journey that went well beyond design.
The Challenge of Perception
Our markets were raising concerns: users felt vulnerable to scams and dubious dealings on OLX. Yet within our organization lingered a conviction that addressing these issues would displease our B2C customers. This entrenched belief meant navigating a relentless wave of skepticism, especially from our markets. Our mission? To overturn this notion and propel our platform to new heights.
Trust as an Outcome
The heart of our user problem was simple: trust was in short supply. Users hesitated, fearing a raw deal or a threat to their safety. Yet we saw an opportunity for OLX, as a marketplace, to make trust a cornerstone of the platform and the user experience.
Trust, we learned, isn’t bestowed arbitrarily. It’s the outcome of multiple pieces of evidence, a pillar of stability and resilience. In its absence, anxiety and uncertainty flourish. This understanding became the North Star guiding our path forward.
Partnering with researcher Patricia, we embarked on a journey of discovery. We sought to understand how users perceived e-commerce, classifieds, and platforms, and how they forged trust across various platforms, OLX included.
Early insights resonated with clarity—users craved a platform where they could voice both praise and concerns about providers, sharing their seller experiences with the world. Transparency emerged as the foundation of trust. The better users comprehended a seller’s offering, down to the minutest detail, the stronger the trust they could forge.
Focusing on the Mainstream
OLX boasted a diverse array of sellers that contributed to a vibrant tapestry of ads. For this project, we set our sights on the platform’s mainstream users, seeking to make an immediate impact before venturing into niche territories. The Goods category, with a particular focus on Electronics, became our arena, aligned with the company’s strategic pivot toward a transactional model.
We convened brainstorming sessions, drawing in a multifaceted audience of PMs, Product Analysts, Engineers, Customer Support Specialists, and UX Designers. These guided sessions explored the opportunities we had unearthed, leading us to a pivotal question: “How might we provide buyers with the tools to build trust in sellers?” The answer became clear: Ratings.
As the project unfolded, we confronted the challenge of measuring the unmeasurable. Together with a product analyst from the Seller Reputation team, we meticulously defined KPIs for the project, aiming for 15% of users to provide feedback and 30% to view others’ feedback. Simultaneously, we sought to reduce customer contact with fraudsters by 10%.
Dividing and Conquering
To execute the project effectively, we split it into two distinct phases. Phase 1 centered on collecting buyer ratings, prompting us to design an enticing, user-centric rating system. Partnering with a product analyst, we experimented with three rating systems through MVP tests. The result was a streamlined flow encompassing an overall rating and free-text feedback.
Phase 1: Collecting Ratings
The Triggers
Prompting users at the right moments. How might we prompt users to rate their experience with a seller? For online users, we kept the flow open: anyone could access it. For offline users, we filtered by sending push notifications only to those who had a meaningful conversation with a seller.
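Below is a minimal sketch of the offline trigger. The `Conversation` fields and the threshold for what counts as a meaningful conversation are hypothetical; they illustrate the gating idea rather than the production heuristic.

```python
from dataclasses import dataclass

@dataclass
class Conversation:
    buyer_id: str
    seller_id: str
    message_count: int
    buyer_replied: bool

def should_send_rating_push(convo: Conversation) -> bool:
    """Offline path: push a rating prompt only after a meaningful exchange."""
    # Hypothetical threshold for a "meaningful conversation".
    return convo.buyer_replied and convo.message_count >= 3
```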
The Rating System
Stars, thumbs, or faces? How might we provide a rating system that users can relate to and that is easy to understand?
I partnered with the product analyst to experiment with three different rating systems. We built an MVP of the rating flow, ran a test, and collected data on flow conversion and rating distributions.
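For illustration, here is how such a test could be analyzed. The event schema (`variant`, `step`, `rating`) is hypothetical; the sketch simply computes the two measures we tracked, per-variant flow conversion and the distribution of submitted ratings.

```python
from collections import Counter, defaultdict

def summarize_mvp(events):
    """Per-variant conversion and rating distribution from raw events."""
    shown = Counter()                    # rating prompts shown per variant
    submitted = Counter()                # ratings submitted per variant
    distribution = defaultdict(Counter)  # rating value counts per variant

    for event in events:
        variant = event["variant"]
        if event["step"] == "prompt_shown":
            shown[variant] += 1
        elif event["step"] == "rating_submitted":
            submitted[variant] += 1
            distribution[variant][event["rating"]] += 1

    return {
        variant: {
            "conversion": submitted[variant] / shown[variant],
            "distribution": dict(distribution[variant]),
        }
        for variant in shown
    }
```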
The Flow
Structuring the experience. We wanted the flow to be smooth and simple:
- Overall rating: a faces-based rating capturing how the experience went overall.
- Feedback: free text where users could explain what was good or bad about the experience.
To ensure the effectiveness of our approach, we collaborated closely with the Research team in Poznan to conduct a Usability Test involving 10 users. The primary aim of this test was to assess the perceived usability of the flow. Additionally, we sought to validate whether we successfully conveyed the playful essence of our new branding principles.
Through this collaborative effort, we gained valuable insights into the user experience, allowing us to refine the design further and align it with our branding objectives. This iterative process helped ensure that the product not only met user expectations but also communicated the desired brand identity.
Phase 2: Exposing the Scores
Phase 2 brought us to the juncture of revealing these ratings on the platform. We crafted a scoring system rooted in the Net Promoter Score formula, offering a nuanced understanding to users. Tactical tests and iterative design refinements ensured that the user experience was nothing short of outstanding.
Calculation
The rule for calculating a seller’s score from the ratings they receive is derived from the Net Promoter Score (NPS) formula, translating excellent ratings to “promoters”, bad ratings to “detractors”, and good ratings to “passives”.
Since most sellers have only one rating, we pad thin samples: sellers with a single real rating get two passive ratings added, and sellers with two real ratings get one.
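To make the mechanics concrete, here is a minimal sketch of the calculation in Python. The NPS translation and the padding rule follow the description above; the linear rescale onto the 1-to-10 scale introduced in the next section is an illustrative assumption, not the exact production mapping.

```python
def seller_score(excellent: int, good: int, bad: int) -> float:
    """NPS-style seller score on a 1-10 scale (the rescale is illustrative)."""
    promoters, passives, detractors = excellent, good, bad

    # Pad thin samples, as described above: +2 passives when a seller
    # has one real rating, +1 passive when they have two.
    real = promoters + passives + detractors
    if real == 0:
        raise ValueError("seller has no ratings yet")
    if real == 1:
        passives += 2
    elif real == 2:
        passives += 1

    total = promoters + passives + detractors
    nps = 100.0 * (promoters - detractors) / total  # classic NPS, -100..100

    # Hypothetical linear rescale from [-100, 100] onto [1, 10].
    return 1 + (nps + 100) * 9 / 200
```

Under this illustrative rescale, a single excellent rating comes out around 7 out of 10 rather than a perfect score, which is exactly the dampening effect the passive padding is meant to achieve.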
Scoring System
After extensive user interviews, we developed a scoring system ranging from 1 to 10, which we categorized into four distinct buckets. This decision was rooted in the user-centered approach of our research: in-depth discussions revealed a strong preference for granularity when consuming information.
Users wanted a scoring system that allows for nuanced distinctions rather than one that oversimplifies or polarizes their assessments. By creating a scale with a broader range and segmenting it into buckets, we catered to that preference while aligning the system more closely with their needs and expectations.
This approach enhances the user experience with more refined feedback and lets us gather more detailed data for analysis. It captures subtle variations in user perceptions and preferences, enabling more data-driven decisions and further improvements to the product’s overall quality.
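For illustration, a small sketch of the bucketing. The cut-offs and labels below are hypothetical; only the 1-to-10 scale and the four buckets are fixed.

```python
# Hypothetical cut-offs and labels; only "a 1-10 scale split into four
# buckets" is fixed, not the exact boundaries.
BUCKETS = [
    (8.5, "Excellent"),
    (6.5, "Good"),
    (4.0, "Fair"),
]

def bucket_for(score: float) -> str:
    """Map a 1-10 score onto one of four display buckets."""
    for threshold, label in BUCKETS:
        if score >= threshold:
            return label
    return "Poor"
```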
Learnings and Informed Design Decisions
With the rating system successfully launched on iOS and Android, the results were profound. User reception was warm, and the impact on safety perception was substantial. Ratings soared, with 12% of users providing feedback and 27% viewing others’ feedback. Simultaneously, contacts with fraudsters decreased by 3%.
This project was not just important; it was transformative, reshaping company perceptions of ratings at OLX. It placed every design decision under scrutiny, with stakeholders seeking explanations. My compass was data—research insights and data-driven choices. It’s a testament to how a design journey can change not just a platform but also an organization’s perspective on trust and safety.