Yap Labs
Nullius in verba


I've been a designer, engineer, and product manager in one fashion or another for over 15 years. This is a sampling of the projects I've worked on over that time.


Company: Reppler helps people manage their social network reputations across Facebook, LinkedIn, Twitter and others.

Solution: Reppler monitors your social feeds for negative trends, spam, and even inappropriate language.

Timeline: Jun 2010 - May 2011


Reppler helps people manage their social network reputations across Facebook, LinkedIn, Twitter and others. As social networks become ubiquitous, understanding and managing your reputation and perception on them is increasingly critical. Employers' reasons for rejecting candidates include evidence of drug use, drinking, and lying about qualifications. Reppler helps people monitor for this kind of content and remove it before it becomes a problem.

According to a frequently cited 2010 Microsoft survey, over 70% of employers have rejected candidates based on information found online. In another survey commissioned by Reppler, over 91% of the 300 hiring professionals polled admitted to using social media to filter candidates.

In addition to monitoring reputation and content, Reppler will…

  • monitor links for malware, phishing and other malicious content
  • present information available from different networks on employment history to reconcile differences
  • scan and present images posted on various networks for inappropriate content
  • calculate a Venn diagram of how your individual Twitter, Facebook and LinkedIn social graphs overlap
  • and, much much more

Methods Used

  • Heuristic analysis
  • SSNiF identification. This process was developed by Philip Haine to help focus designs by forcing every feature to be explained with a stakeholder and situation that informs the need (Stakeholder, Situation, Need, Feature). Ultimately, the needs become the primary focus of the design.
  • Wireframes
  • Visual design
  • Usability testing
  • Front-end development

Example Problem: Needed a dynamic representation of information

The core design we finally landed on is relatively flat: a summary view of four main areas, each with a detailed view that can be drilled into. We found that balancing information density was critical to success. People needed enough to feel informed and in control, but not so much that they were overwhelmed.

Additionally, it didn't make sense to refresh the entire page every time new content was displayed. That traditional approach caused serious lag, because we authenticated through Facebook Connect and needed some relatively hefty libraries for rendering graphs (Highcharts). Granted, much of this could be cached after the first page load, but I wanted to optimize the experience further by building a single-page application that could apply much more efficient updates pushed from the server as JSON responses.

Example Solution: Information architecture and a pseudo-MVC JavaScript framework

Starting from a dashboard and drilling into the details helped mitigate information overload. I also limited the options available at each step, so more advanced options were kept deeper in the information architecture for power users without confusing typical users. By progressively revealing more information and options at each level, the total IA ended up only three levels deep (overview, details, advanced options). This approach tested well throughout several releases.

Although we never got to proper bi-directional server-client communication, a combination of long-polling and a pseudo-MVC front end kept the UI up to date without refreshing the page. I had hoped to use something like SproutCore or Backbone, but at the time I found them either immature or too opinionated for what we needed. I was particularly sensitive to learning curve, because I had only a few weeks to finish the visual design and build the entire front end. So I assembled several technologies and techniques into a quick, rough MVC.
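The core of that pattern can be sketched roughly like this: a model that notifies subscribed views when it changes, fed by a long-polling loop that pushes JSON updates into it. The names, payload shape, and simulated endpoint below are illustrative only, not Reppler's actual code.

```javascript
// Model: holds state and notifies subscribed views when it changes.
function Model(initial) {
  this.state = initial || {};
  this.listeners = [];
}
Model.prototype.subscribe = function (fn) { this.listeners.push(fn); };
Model.prototype.set = function (patch) {
  for (var key in patch) this.state[key] = patch[key];
  for (var i = 0; i < this.listeners.length; i++) this.listeners[i](this.state);
};

// View: a render function subscribed to the model. In the real app this
// would update the DOM; here it just records what would be rendered.
var scores = new Model({ reputation: null });
var rendered = [];
scores.subscribe(function (state) {
  rendered.push('Reputation score: ' + state.reputation);
});

// Controller: long-poll the server and push each JSON response into the
// model. `fetchUpdates` stands in for an XHR to a hypothetical endpoint.
function longPoll(fetchUpdates, model) {
  fetchUpdates(function (json) {
    model.set(json); // views re-render automatically via their subscriptions
    // The real loop would immediately issue the next request here:
    // longPoll(fetchUpdates, model);
  });
}

// Simulated server response, for illustration.
longPoll(function (cb) { cb({ reputation: 87 }); }, scores);
```

The point of the indirection is that the long-polling code never touches the UI directly; it only updates the model, and every view that cares re-renders itself.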

Now, I fully admit the architecture could be improved upon. The entire team consisted of the CTO and co-founder, Vlad Gorelik; one back-end engineer; one part-time QA engineer; and me. So I was responsible for pretty much everything customer-facing, from concept to production delivery. I don't really think of myself as an engineer, but I can hold my own when needed. If I were to build something like this today, I'd seriously consider Angular and D3; both projects have come a long way in the last couple of years.


  • Grew product to over 50k active users in the first year
  • Company acquired by TrustedID in 2012


Reppler was reasonably successful, but I believe it was a bit ahead of its time. Although there were plenty of examples and research in 2010 supporting the need for reputation management, very few people in the population at large understood that need. By 2013 or 2014, I think Reppler would have stood a better chance of becoming widely adopted. I also think some of the social graph analysis algorithms developed for Reppler could reveal interesting trends given a larger dataset. Most of the social graph analysis I've read focuses on a single network (e.g., LinkedIn, Facebook, or Twitter); Reppler, by contrast, had a more complete set of data across all of the networks for each user. We started to see novel trends, but 50k users simply isn't a large enough sample to draw many conclusions from.