Reppler manages individuals' social network reputations.

June 2010 - May 2011

As the principal UX Designer and UI Engineer, I was responsible for both designing and building the entire interface while two other engineers (and co-founders) developed the back-end.

Reppler helps people manage their social network reputations across Facebook, LinkedIn, Twitter, and others. In the age of social networks, managing and understanding reputation and perception online is becoming critical. Reasons for rejecting potential candidates include drug use, drinking, and lying about qualifications. Reppler helps monitor for this behavior and remove it before it becomes a problem.

According to a frequently cited Microsoft survey from 2010, over 70% of employers have rejected candidates based on information found online. In another survey commissioned by Reppler, over 91% of 300 hiring professionals admitted to using social media as a filtering mechanism.

Results

  • Grew product to over 50k active users in the first year.

  • Company acquired by TrustedID in 2012.

  • Monitored links and content for malware, phishing, and other malicious content.

  • Presented information available from different networks on employment history to reconcile differences.

  • Scanned and presented images posted on various networks for inappropriate content.

  • Calculated a Venn diagram of how your individual Twitter, Facebook, and LinkedIn social graphs overlap.

Conclusions

Reppler was successful. However, I believe it was a bit ahead of its time. Although research supported the product premise of reputation management, very few people understood the need in 2010. By 2013 or 2014, I think Reppler would have stood a better chance of becoming widely adopted.

The social graph analysis algorithms developed for Reppler would have shown valuable trends with a more robust data set. Reppler’s approach to analysis across social graphs was novel at the time, and remains uncommon years later.

Dynamic representation of information efficiently delivered and managed by the client

An example from the project

Research and Approach

The core design we finally landed on is relatively flat, with a summary view of four main areas, each having a detailed view that can be drilled into. We found that balancing information density was critical to success: people needed enough to feel informed and in control, but not so much that they were overwhelmed.

Additionally, it didn’t make sense to refresh the entire page every time new content was displayed. That traditional approach caused serious lag, because we authenticated through Facebook Connect and needed some relatively hefty libraries for rendering graphs (Highcharts). Granted, much of this could be cached during the first page load, but I wanted to further optimize the experience by building a single-page application that could accept much more efficient updates pushed from the server as JSON responses.
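The single-page idea can be sketched roughly like this (the names `createDashboardModel` and `applyUpdate`, and the section keys, are illustrative, not the actual Reppler code): the client keeps a model of the dashboard sections and merges incoming JSON deltas into it, so only the sections that actually changed need to re-render.

```javascript
// Illustrative sketch only -- not the production Reppler front-end.
// The client holds a model of the dashboard sections and merges JSON
// deltas pushed from the server, reporting which sections changed so
// the view layer can redraw just those, with no full page refresh.
function createDashboardModel(initial) {
  const model = Object.assign({}, initial);
  return {
    // Merge a server-pushed JSON delta; return the keys whose values
    // actually changed.
    applyUpdate(delta) {
      const changed = [];
      for (const key of Object.keys(delta)) {
        if (JSON.stringify(model[key]) !== JSON.stringify(delta[key])) {
          model[key] = delta[key];
          changed.push(key);
        }
      }
      return changed;
    },
    get(key) { return model[key]; }
  };
}
```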

Principal skills

  • SSNiF identification

  • Heuristic analysis

  • Wireframes

  • Low and high-fidelity mockups

  • Visual design

  • Usability testing

  • Product management

  • Engineering management

Design and Engineering

Starting from a dashboard and drilling into the details helped mitigate information overload. I also limited the options available at each step, so more advanced options were kept deeper in the information architecture for power users without confusing typical users. By progressively revealing more information and options at each level, the total IA was ultimately only three levels deep (overview, details, advanced options). This approach tested well throughout several releases.

Although we didn’t get around to proper bi-directional server-client communication, a combination of long-polling and a pseudo-MVC front-end solution kept the front-end up to date without hard refreshes. I had hoped to use something like SproutCore or Backbone, but, at the time, I found them either immature or too opinionated for what we needed. I was particularly sensitive to learning curve, because I had only a few weeks to finish the visual design and build the entire front-end. So, I assembled several technologies and techniques into a quick, rough MVC to get a proof of concept out the door.
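The heart of such a quick, rough MVC is little more than an observable model: views subscribe to changes, and whatever the long-poll loop pulls down gets folded into the model, which notifies the views. A minimal sketch (illustrative names, not the original code):

```javascript
// Illustrative sketch of a minimal observable model -- the "M" in a
// rough front-end MVC. Views register change listeners; long-poll
// responses are folded in via set(), which notifies only on real
// changes.
function observable(state) {
  const listeners = [];
  return {
    onChange(fn) { listeners.push(fn); },
    set(key, value) {
      if (state[key] === value) return; // no-op updates stay silent
      state[key] = value;
      listeners.forEach(fn => fn(key, value));
    },
    get(key) { return state[key]; }
  };
}
```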

Now, I fully admit the architecture could be improved upon. The entire team consisted of the CTO and co-founder, Vlad Gorelik, one back-end engineer, one part-time QA engineer, and me. So, I was responsible for pretty much everything customer-facing, from concept to production delivery. I don’t really think of myself as an engineer, but I suppose I can hold my own if needed. If I were to build something like this today, I’d seriously consider Angular and D3. Both projects have come a long way in the last couple of years.

Early wireframes

Main dashboard

This landing page organized information into four major groups: overall reputation/impression, inappropriate content, network dynamics, and privacy/security risks. Each quadrant could then be drilled into for additional information.

The left side kept the current user and overall score visible throughout the app.

Drilled into the impression details

This page is broken into a few sections. The first shows actionable items to help users identify ways to improve their social network impression. For example, you may have liked controversial or inaccurate pages or posts, which negatively impacts the overall impression others have of you.

Below that, AI sentiment and tonal analysis provides additional insight. This considers both content the user has generated, and content posted by others the user follows.

Lastly, the word cloud and graphs at the bottom provide context. For example, a large amount of Facebook wall content coming from questionable sources may negatively impact your reputation.

Drilled into the social network details

The top section presents a Venn diagram describing how your various social networks overlap. Interestingly, we found an average overlap of ~62% between Facebook/Twitter and LinkedIn for most users. This suggests personal and professional worlds are far more intertwined than most people recognize.
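One simple way to quantify that kind of pairwise overlap is intersection size over the smaller graph; this is only a sketch of the general idea, not Reppler's actual metric, and the `overlap` helper is hypothetical:

```javascript
// Illustrative sketch: quantify how two social graphs overlap using
// intersection size over the smaller graph. A result of 1.0 means one
// network's contacts are entirely contained in the other's. This is a
// hypothetical helper, not Reppler's production metric.
function overlap(graphA, graphB) {
  const a = new Set(graphA);
  const b = new Set(graphB);
  let shared = 0;
  for (const id of a) {
    if (b.has(id)) shared++;
  }
  return shared / Math.min(a.size, b.size);
}
```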

It also gave a snapshot of recent content and compared the information provided on the different networks. So, when changing a job or status, users could easily verify that the different networks were at least consistent.

Front-end architecture diagram

Lazy-loading libraries and content made the client much more efficient and improved load times. An MVC paradigm enabled a highly interactive user experience with minimal network load. It also allowed the server to optimize and cache expensive calculations (e.g., network overlap for the Venn diagram) while providing the client with sufficient information for smooth transitions and animations.
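The server-side caching of expensive calculations can be sketched as a simple per-user memoizer (illustrative, not the production code; `cached` and the user-ID keying are assumptions):

```javascript
// Illustrative memoizer: cache the result of an expensive per-user
// calculation (such as the Venn overlap) keyed by user ID, computing
// it at most once until explicitly invalidated.
function cached(compute) {
  const store = new Map();
  return {
    get(userId) {
      if (!store.has(userId)) {
        store.set(userId, compute(userId));
      }
      return store.get(userId);
    },
    // Drop a stale entry, e.g. after the user's graphs are re-scanned.
    invalidate(userId) { store.delete(userId); }
  };
}
```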

The plan was to replace some of the custom-rolled tools with a standard framework such as Backbone.js, and to implement WebSockets once they had better browser support or polyfills. This architecture kept everything nice and tidy for those transitions.
