Keeping Tabs on your Microservice Ecosystem with Redis Pubsub

Kevin Hoffman, Lead Engineer, Capital One

I wanted to be able to remotely monitor, diagnose, and command my microservices. While there are countless tools for some of these, nothing gave me the lightweight, simple interface I wanted. I decided to implement a remote management protocol using Redis pubsub. I now have a dashboard that I can use to get immediate "red/green" feedback on services and instances, as well as issue commands in real time. In this session I'll talk about the architectural decisions and trade-offs that led me to this, why I chose to use Redis pubsub, and how all of the various components were built in a multi-language, cloud-native microservice environment. Key takeaways will include a discussion of Redis pubsub, the remote management protocol, building clients in Go, and building the monitoring dashboard and service in Elixir.
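As a rough sketch of the pubsub-based heartbeat-and-command idea (in Python with redis-py rather than the talk's Go and Elixir code; the channel names and JSON payload are illustrative assumptions):

```python
# Minimal sketch of a pubsub heartbeat/command channel (hypothetical channel
# names; the talk's actual protocol and Go/Elixir components will differ).
import json
import socket
import time

import redis

r = redis.Redis(decode_responses=True)

def publish_heartbeat(service):
    """Each service instance announces itself so a dashboard can show red/green."""
    payload = {"service": service, "host": socket.gethostname(), "ts": time.time()}
    r.publish("management:heartbeat", json.dumps(payload))

def listen_for_commands(service):
    """Block on a per-service command channel and react to remote commands."""
    p = r.pubsub()
    p.subscribe(f"management:commands:{service}")
    for msg in p.listen():               # blocks forever; fine for a sketch
        if msg["type"] == "message":
            command = json.loads(msg["data"])
            print("received command:", command)   # e.g. {"op": "restart"}

publish_heartbeat("billing-service")
```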

How Adobe Leverages Redis to Serve/Secure Billions of API Requests

Ajay Kemparaj, Sr Computer Scientist, Adobe Systems

On the Adobe API Platform, Redis is one of the core critical components: any API served by Adobe I/O has some Redis involved in it. We will cover use cases of Redis in the API gateway, including how we use different Redis data structures to store API publisher metadata, store and expire JWT tokens, implement throttling and rate limiting, limit certain users who are in dev mode as opposed to production mode, and use HyperLogLog for analytics. The talk covers Redis data structures such as HyperLogLog, sets, hashes, and lists.
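A minimal sketch of three of the gateway patterns mentioned above, assuming Python with redis-py and illustrative key names (not Adobe's actual schema): token storage with expiry, a fixed-window rate limit, and HyperLogLog analytics.

```python
import redis

r = redis.Redis(decode_responses=True)

def cache_token(jti, claims, ttl_seconds=3600):
    # Store the token keyed by its ID and let Redis expire it automatically.
    r.set(f"token:{jti}", claims, ex=ttl_seconds)

def allow_request(api_key, limit=100, window=60):
    # Fixed-window rate limit: count calls per key per window, expire the counter.
    key = f"ratelimit:{api_key}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)
    return count <= limit

def record_caller(api_name, api_key):
    # HyperLogLog gives an approximate unique-caller count in ~12 KB per key.
    r.pfadd(f"analytics:callers:{api_name}", api_key)

print(allow_request("demo-key"))
record_caller("demo-api", "demo-key")
print(r.pfcount("analytics:callers:demo-api"))
```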

Microservices and Redis: A Match made in Heaven

Vijay Vangapandu, Software Architect, eHarmony

eHarmony is an online relationship services provider that, since its inception, has made 20+ billion total matches, with 15+ million matches per day globally and 1+ million user communications each day. In the competitive dating industry, our product strives to push more features to users more rapidly, and needs to adopt new processes and technologies to stay cutting edge while keeping the existing application stable. In this session, we will discuss the business and product benefits of a microservices architecture and how Redis underpins the data services needed in this architecture. With microservices, our developers gain more independence and can scale smaller, loosely coupled components more easily. Our previous data store was operationally too cumbersome for this new lightweight architecture. We needed a NoSQL store that was easy to configure and maintain, consistent in its high availability, suitable for different use cases, and simple but robust. Redis Enterprise's low-touch operations and 24/7 support underpin our infrastructure, and we expect it to match our growing needs and expectations. Redis and Redis Enterprise are used to supercharge the following use cases in our microservices ecosystem: an authorization store, a badge store, the speed store in a lambda architecture, sorted sets for security, configuration services, hibernating our second-level cache, a lightweight broker for publishing messages, rate control of API usage by IP/UA, and user services migration. This session will discuss how we use Redis data structures like strings, sets, hashes, lists, sorted sets, and HyperLogLog to deliver our microservices.
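For illustration, a minimal cache-aside sketch of the second-level-cache idea, assuming Python with redis-py and hypothetical function and key names (not eHarmony's code):

```python
import json

import redis

r = redis.Redis(decode_responses=True)

def load_profile_from_primary_store(user_id):
    # Placeholder for the expensive lookup the cache is protecting.
    return {"user_id": user_id, "plan": "premium"}

def get_profile(user_id, ttl_seconds=300):
    key = f"cache:profile:{user_id}"
    cached = r.get(key)
    if cached is not None:
        return json.loads(cached)            # cache hit
    profile = load_profile_from_primary_store(user_id)
    r.set(key, json.dumps(profile), ex=ttl_seconds)  # expire so entries self-clean
    return profile

print(get_profile("42"))
```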

The Home Depot: Implementation Patterns to Leverage Redis to Turbo-Charge Existing (Legacy) Applications

Hari Ramamurthy, The Home Depot

Many existing applications are monolithic and built on a relational database. We have found that introducing Redis significantly helps with the performance and scalability of such an application while we work to break the large application up over time into a suite of microservices deployed on a PaaS. Implementation patterns using the in-memory data structures provided by Redis allowed us to deliver a number of improvements that significantly improved the health and performance of a critical system, one that manages over $25B of business with significant business-process complexity. Using Redis to create read caches, manage concurrency issues, and build write-through and write-back caches improved response times by up to 75% and reduced transaction latencies by 30% in scenarios such as: faster lookups for catalog, config, and master data; concurrent writes to RDBMS tables with multiple indexes; distributed lock management while accessing shared resources; and increased parallelism when multiple processes need to access the same data.
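A minimal sketch of the distributed-lock pattern mentioned above (SET with NX/PX plus a Lua release that only deletes the lock if the caller still owns it), assuming Python with redis-py; an illustration, not The Home Depot's implementation:

```python
import uuid

import redis

r = redis.Redis(decode_responses=True)

# Only delete the lock if the stored token still matches ours.
RELEASE = r.register_script("""
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
""")

def acquire(resource, ttl_ms=10000):
    token = str(uuid.uuid4())
    # NX = only set if absent, PX = auto-expire so a crashed holder can't wedge others.
    if r.set(f"lock:{resource}", token, nx=True, px=ttl_ms):
        return token
    return None

def release(resource, token):
    return RELEASE(keys=[f"lock:{resource}"], args=[token]) == 1

token = acquire("sku:12345")
if token:
    try:
        pass  # mutate the shared resource here
    finally:
        release("sku:12345", token)
```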

Redis at Lyft: 1,000 Instances and Beyond

Daniel Hochman, Lyft

Revisit Lyft's Redis infrastructure at a higher level following last year's focus on geospatial indexing. This year's talk is broadly applicable to optimizing any type of application powered by Redis. This session will take a broader look at Lyft's deployment of Redis and deep dive into what we've learned recently when writing our open-source Envoy Redis proxy. The talk will cover: deployment, configuration, partitioning, consistent hashing, data model optimization, and client abstractions.
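As a generic illustration of client-side partitioning with consistent hashing (a toy Python/redis-py ring, not Lyft's or Envoy's implementation; the host list and virtual-node count are assumptions):

```python
import bisect
import hashlib

import redis

HOSTS = ["redis-1:6379", "redis-2:6379", "redis-3:6379"]  # example shard list
VNODES = 100  # virtual nodes per host smooth out the key distribution

def _hash(value):
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

ring = sorted((_hash(f"{h}#{i}"), h) for h in HOSTS for i in range(VNODES))
points = [p for p, _ in ring]

def host_for_key(key):
    idx = bisect.bisect(points, _hash(key)) % len(ring)
    return ring[idx][1]

def client_for_key(key):
    # A real client library would cache connections instead of reconnecting.
    host, port = host_for_key(key).split(":")
    return redis.Redis(host=host, port=int(port))

print(host_for_key("ride:abc123"))
```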

Redis Analytics Use Cases

Leena Joshi, Redis Labs

Redis incorporates numerous cool data structures that make analytics simple and high-performance. Join this session to learn how customers use Redis for real-time analytics such as fraud detection, recommendation generation, maintaining distributed counts for metering, estimating element frequencies, and reporting. We will also discuss different ways to visualize data in Redis.

Redis is Dead. Long live Redis!

Ryan Luecke, Box, Inc.

Find out how Box migrated from Redis to Redis Cluster to power Recent Files, Webapp Sessions, and more. Discover the obstacles we faced when deploying hundreds of Redis instances inside Redis Clusters, as well as the real-world benefits we enjoy now that we have migrated. Learn about the automation we've built to help keep our Redis Clusters running and maintained.

Remote Monitoring and Control of a Scientific Instrument

Hardeep Sangha, ThermoFisher Scientific

The remote monitoring application allows users to monitor run data, send remote commands to alter the state of a run, and view analyzed sample files at the end of the run. The gist of this talk is the architectural details of this application, with Redis at its core. We will also discuss the various other data storage options that were evaluated before settling on Redis. A brief outline of the design follows. Data ingestion is architected to be serverless using AWS Lambda: multiple streams of data are funneled from the IoT gateway (via IoT Rules) into Kinesis and then into Lambda functions, from where the data is loaded into Redis. The serverless architecture gives us the scale to respond to increased concurrency without having to provision capacity upfront. However, due to Lambda's inherent cold-start behavior, we opted for pre-provisioned compute capacity (an auto-scaling EC2 cluster running across multiple zones) for data visualization, sourcing data directly from the Redis cluster, which makes the visualizations highly performant. Spring Data libraries form the data access layer. We make extensive use of sorted sets so that stream data is automatically sorted upon ingestion. Redis pub/sub and notifications are used to ensure data integrity and to clean up temporary artifacts. The cluster itself is multi-node with read replicas, running across multiple AWS zones for high availability and automatic failover.
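A minimal sketch of the sorted-set ingestion pattern described above, assuming Python with redis-py and illustrative key names: each sample is scored by its timestamp so reads come back time-ordered.

```python
import json
import time

import redis

r = redis.Redis(decode_responses=True)

def ingest_sample(run_id, sample):
    # In the described architecture this would run inside an AWS Lambda handler.
    ts = sample.get("ts", time.time())
    r.zadd(f"run:{run_id}:samples", {json.dumps(sample): ts})

def samples_between(run_id, start_ts, end_ts):
    # Range query by timestamp; results come back already ordered by score.
    raw = r.zrangebyscore(f"run:{run_id}:samples", start_ts, end_ts)
    return [json.loads(item) for item in raw]

ingest_sample("run-1", {"ts": time.time(), "temp_c": 37.2})
print(samples_between("run-1", 0, time.time()))
```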

Writing Modular & Encapsulated Redis Client Code

Jim Nelson, Internet Archive

Programming Redis client code often means breaking encapsulation in order to maximize Redis efficiency, reduce network round-trips, and/or take advantage of transactions. This leads to classic software engineering problems: difficult-to-maintain code, contract violations, tightly-locked dependencies... balls of mud that are fragile and sensitive to change. This talk discusses strategies for developing modular and contained Redis code via concepts such as promises/futures and event notifications. Presented code will be in PHP using the Predis client, but all the concepts presented here should be portable to other languages and libraries.
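The talk's examples are in PHP with Predis; as a rough Python/redis-py analogue of the promise/future idea, modules can queue commands on a shared pipeline and receive a handle they resolve after a single round-trip (class and key names here are hypothetical):

```python
import redis

class Deferred:
    """Resolves to one command's reply once the shared pipeline executes."""
    def __init__(self):
        self._value = None
    def resolve(self, value):
        self._value = value
    def get(self):
        return self._value

class BatchedClient:
    """Lets independent modules enqueue reads without knowing about each other."""
    def __init__(self, client):
        self.pipe = client.pipeline(transaction=False)
        self.pending = []
    def get(self, key):
        self.pipe.get(key)
        d = Deferred()
        self.pending.append(d)
        return d
    def flush(self):
        for deferred, reply in zip(self.pending, self.pipe.execute()):
            deferred.resolve(reply)
        self.pending = []

r = redis.Redis(decode_responses=True)
batch = BatchedClient(r)
a = batch.get("module_a:config")   # each module asks independently...
b = batch.get("module_b:config")
batch.flush()                      # ...one network round-trip serves both
print(a.get(), b.get())
```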

Increase Application Performance with SQL Auto-Caching; No Code Changes

Roland Lee, HeimdallData

Configuring caches is complex. Engineering teams spend a lot of resources building and maintaining cache subsystems, and doing so requires risky manual intervention (e.g., deciding what to cache and invalidate). Now there is a solution that provides granular SQL analytics while safely auto-caching into Redis Labs. Heimdall Data intelligently caches, invalidates, and synchronizes across Redis Labs nodes: no more misconfigurations or stale data, and zero changes required to your existing application or database. In this session you will learn how easy it is to speed up your application in five minutes with Heimdall for Redis Labs, see how Heimdall analytics helps you identify SQL bottlenecks, and see a demo of Heimdall with Redis Labs in action.

Amazing User Experiences with Redis & RediSearch

Stefano Fratini, Siteminder

What makes a user experience amazing? The way a user interacts with the system (the interface) and the way the system responds (response time). Redis is fast on its own, and RediSearch makes it a great and easy tool for powering autocomplete and full-text search, giving your users the experience they deserve. From back to front.
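A minimal autocomplete sketch using RediSearch's suggestion dictionary (FT.SUGADD / FT.SUGGET) issued as raw commands from Python with redis-py; it assumes the RediSearch module is loaded, and the key and sample data are illustrative:

```python
import redis

r = redis.Redis(decode_responses=True)

def index_hotel(name, score=1.0):
    # Add (or re-score) an entry in the suggestion dictionary.
    r.execute_command("FT.SUGADD", "autocomplete:hotels", name, score)

def suggest(prefix, max_results=5):
    # Fuzzy prefix lookup, highest-scored suggestions first.
    return r.execute_command("FT.SUGGET", "autocomplete:hotels", prefix,
                             "FUZZY", "MAX", max_results)

for hotel in ["Grand Pacific", "Grand Plaza", "Harbour View"]:
    index_hotel(hotel)
print(suggest("gra"))   # e.g. ['Grand Pacific', 'Grand Plaza']
```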

Redis Memory Optimization

Sripathi Krishnan, HashedIn

This is a practical guide to optimizing the memory usage of your Redis instance. The talk will introduce the internal data structures used by Redis - quicklists, intsets, skiplists, hash tables, and so on. It will then explain how the choice of data structure affects memory usage, and how you as a developer can influence which data structure Redis picks. The talk will also focus on common mistakes developers make that cause bloat in Redis memory consumption. Finally, it will walk through how you can use redis-rdb-tools to diagnose and fix memory problems.
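As a quick illustration (Python with redis-py, illustrative keys), you can inspect which internal encoding Redis chose for a key and see how the size thresholds push a small, memory-efficient encoding into a full hash table; exact encoding names vary by Redis version:

```python
import redis

r = redis.Redis(decode_responses=True)

r.delete("user:1")
r.hset("user:1", mapping={"name": "ada", "plan": "pro"})
# Small hashes use a compact encoding ('ziplist' or 'listpack' depending on version).
print(r.object("ENCODING", "user:1"))

# Push the hash past hash-max-ziplist-entries and it converts to a real hash table,
# which is faster for large objects but far less memory-efficient.
r.hset("user:1", mapping={f"field{i}": i for i in range(1000)})
print(r.object("ENCODING", "user:1"))   # -> 'hashtable'

# The thresholds themselves are tunable server configuration.
print(r.config_get("hash-max-ziplist-entries"))
```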

Redis at the Heart of Large Data Pipelines

Piyush Verma, Oogway.in

Imagine distributed systems and data. Loads of it. This talk explores common scenarios in building large-scale data pipelines and how Redis helps in addressing them. Scenarios we will cover include ordered delivery, dead-letter queues, distributed wait groups, de-duplication across moving windows, idempotence, discrete and rolling rate limits, throttling across shards, and batching a stream with time or spatial thresholds, among others. The talk is the tale of one of our solutions, deployed with a state public transportation department in India, describing the problems encountered, why each solution was picked, and how Redis forms an essential part of the solution.
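A minimal sketch of two of the patterns listed above, assuming Python with redis-py and illustrative key names: idempotent (de-duplicated) processing via SET NX EX, and a rolling-window rate limit on a sorted set.

```python
import time
import uuid

import redis

r = redis.Redis(decode_responses=True)

def seen_before(message_id, window_seconds=3600):
    # First writer wins; the marker expires, giving de-duplication over a moving window.
    return not r.set(f"dedupe:{message_id}", 1, nx=True, ex=window_seconds)

def within_rate(client_id, limit=10, window_seconds=1):
    key = f"rate:{client_id}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window_seconds)  # drop events outside the window
    pipe.zadd(key, {str(uuid.uuid4()): now})             # record this event
    pipe.zcard(key)                                      # how many remain in the window
    pipe.expire(key, window_seconds)
    _, _, count, _ = pipe.execute()
    return count <= limit

print(seen_before("msg-1"), seen_before("msg-1"))   # False, True
print(within_rate("shard-7"))
```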

Brief Data and Ephemeral UIs

Sandro Pasquali, Bulldog and Fisk

Snapchat's great discovery was the power of throwaway data. We are now allowed to destroy data. This has overthrown years of thinking that digital constructions must be preserved, like every mark on every piece of paper. On this view, UIs (a media object, a landing page) are cheap, requiring low storage costs and only modest construction costs in terms of time, data access, and data transmission. Redis is fast and well appointed with eviction, expiration, and compaction tools; managed volatility is its secret weapon. And now we have Streams. Time- and index-ordered ranges in Redis Streams can be pulled as needed, quickly, requiring no structural persistence: a request is always a Stream range, and when it is received a UI bounded by that range is spun up. We can simply build UIs on the fly, dependent solely on data events.

Web UIs are no longer optimistic ghosts: abstract templates of supposedly real things, stored as blueprints, indifferent to whether the information they need to reanimate still exists or has changed in meaning, a scary trap for the user and a sink for their frustration with data synchronization errors and general errors of absence. In this talk we will look at how bounded ranges in Streams can be used to construct a sort of materialized view, bound as the store feeding a React UI that is constructed just in time and disposed of once the targeted user has viewed it. We'll deploy "IoT" React components powered by Streams across a serverless "web application" in which neither the data nor the rendered view demands persistence, helping with scale and distributed architectures.
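A minimal sketch of the Streams idea, assuming Python with redis-py and a hypothetical stream name: append events with XADD, then materialize a bounded view on demand with XRANGE and discard it after rendering.

```python
import redis

r = redis.Redis(decode_responses=True)

STREAM = "events:widget:42"   # hypothetical stream name

def record_event(kind, payload):
    # MAXLEN ~ keeps the stream itself bounded, matching the "throwaway data" theme.
    r.xadd(STREAM, {"kind": kind, "payload": payload}, maxlen=10_000, approximate=True)

def view_between(start_id="-", end_id="+", count=100):
    # The "UI" is just whatever this bounded range returns at request time.
    return r.xrange(STREAM, min=start_id, max=end_id, count=count)

record_event("click", "hero-banner")
record_event("scroll", "75%")
for entry_id, fields in view_between():
    print(entry_id, fields)
```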

Tailoring Redis Modules for Your User Needs

Adam Lev-Libfeld, Tamar Technological Solutions

How and when should you approach designing a Redis module for your system? Even though Redis modules have been available for over a year and there is plenty of interest in the possibilities this meta-feature enables, there has been little to no adoption of modules written by Redis users to satisfy unique needs. I believe this is mostly due to the difficulty of discerning when, where, and how these modules can be beneficial, and of assessing the cost of developing them. In this talk, I will give the audience the tools to understand the conditions under which migrating existing code to a Redis module can (or can't) improve performance and user experience while helping to contain costs, and how to go about it. The talk revolves around two essential threads: 1. three common use cases (all drawn from real-life scenarios), and 2. methods and techniques for quickly migrating an existing codebase into module form.

The Best Database for Games - A Redis Case Study

Gonzalo Garcia, Etermax

We are a mobile gaming company based in Buenos Aires, Argentina, and the creator of popular hits like Trivia Crack and Pictionary. As the leading company in Latin America for social games, we've emerged as an industry model in cross-platform game development for the region. With a total of 300 million installs, 5 million active users, and a peak of up to 25 million daily users for our most popular games, our company has grown exponentially and greatly benefited from Redis Enterprise. We use Redis to manage our user sessions, for device management, password authentication, pub/sub, caching for slow queries, expiration of game state once a game ends, and storing various user- and game-related information, such as a hash of all the questions a user has answered and a hash of all the users a player has been matched up against (with scores, wins, losses, etc.).

Redis also powers real-time analytics, which we use for the following: storing user preferences, which questions were liked or disliked, how many attempts a player makes on each question, how many questions were answered correctly, graphing the difficulty of questions versus users, the time it takes to answer a question, and deciding which questions to serve up next. We also use Redis to search for old user names, to generate notifications that games are about to expire, for small systems behind game features, and much more. Additionally, we are seeing several advantages to using Redis on Flash: we have old or inactive users dating back to 2012, and with Redis on Flash our newest and most relevant active-user data can live in RAM while the rest sits in Flash, saving us considerable amounts on infrastructure. We now guarantee our users sub-millisecond response times and high throughput, while cutting infrastructure spending by over 70% without compromising on performance.
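For illustration, a sketch of two of the game patterns above in Python with redis-py (key names are made up, not Etermax's schema): a per-user hash of answered questions, and a game-state key that expires after the game ends.

```python
import json

import redis

r = redis.Redis(decode_responses=True)

def record_answer(user_id, question_id, correct):
    # One hash per user: field = question, value = whether it was answered correctly.
    r.hset(f"user:{user_id}:answers", question_id, int(correct))

def already_answered(user_id, question_id):
    return r.hexists(f"user:{user_id}:answers", question_id)

def save_game_state(game_id, state, ttl_after_end=86400):
    # Keep finished games around briefly, then let Redis reclaim the memory.
    r.set(f"game:{game_id}:state", json.dumps(state), ex=ttl_after_end)

record_answer("u1", "q1001", True)
print(already_answered("u1", "q1001"))
save_game_state("g-77", {"status": "finished", "winner": "u1"})
```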

Fail-Safe Starvation-Free Durable Priority Queues in Redis

Jesse Willet, ProsperWorks, Inc.

ProsperWorks CRM is a cloud-based CRM that integrates with Gmail, Google Drive, and the rest of the Google Apps suite. We have a growing fleet of asynchronous job types. Many of these touch systems that are slow, are close to their scale limits, have sensitive rate limits, or exhibit frequent transient failures. Moreover, our compute containers are terminated at arbitrary times. In most cases we don't know these limits in advance, or the limits change over time. We address these problems with a variety of strategies, which will be covered. At the heart of these techniques are "Icks": priority queues with write-folding and two-phase commit, implemented in Redis/Lua. Icks are not unlike the BRPOPLPUSH pattern, but they offer additional non-starvation guarantees and operational flexibility.
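As a generic sketch in the spirit of these patterns (Python with redis-py and Lua; the talk's "Icks" differ in their details and guarantees): a priority queue with a two-phase pop that reserves a job atomically and acknowledges it only when the work completes.

```python
import time

import redis

r = redis.Redis(decode_responses=True)

# Atomically move the highest-priority (lowest-score) job into an in-flight
# zset stamped with the reservation time, and return it to the caller.
RESERVE = r.register_script("""
local job = redis.call('zrange', KEYS[1], 0, 0)[1]
if job then
    redis.call('zrem', KEYS[1], job)
    redis.call('zadd', KEYS[2], ARGV[1], job)
end
return job
""")

def enqueue(queue, job_id, priority):
    # ZADD write-folds naturally: re-adding the same job just updates its priority.
    r.zadd(f"{queue}:ready", {job_id: priority})

def reserve(queue):
    return RESERVE(keys=[f"{queue}:ready", f"{queue}:inflight"], args=[time.time()])

def acknowledge(queue, job_id):
    r.zrem(f"{queue}:inflight", job_id)   # second phase: the work really finished

enqueue("sync", "job-1", priority=5)
job = reserve("sync")
if job:
    acknowledge("sync", job)
```

A reaper that scans the in-flight set for stale reservation timestamps and re-enqueues them would restore jobs lost when a container is terminated mid-work.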

Integrating Redis with ElasticSearch to Get the Best Out of Both Technologies

Dmitry Polyakovsky, Zumobi

This talk covers using Redis to help scale Elasticsearch, comparing the RediSearch module to Elasticsearch, and using Elasticsearch to analyze some of the data stored in Redis.

Application of Redis in IoT Edge Devices

Glenn Edgar, Lacima Ranch

LaCima Ranch is an avocado ranch in Riverside, California. The ranch had limited manual resources, so we spent the last 4 years developing automation systems, which we made open source. About 3 years ago we discovered Redis, almost by accident, and incorporated it into our control system. Initially we wanted to replace SQLite because of the need for multi-process operation. The target device was a 32-bit ARM processor with 1 GB of RAM, and Redis operated satisfactorily in this environment. Over the years, our use of Redis evolved to cover the following functions: 1. main primary store, 2. event bus, 3. time-series database, 4. graphical data, 5. search engine for log files. Over the years the control system was refactored, and what emerged was a core Redis-based system that serves as an "apartment complex" for SCADA and other control-type applications. We found that most of the functions an online business uses to monitor its data centers can be replicated on IoT edge devices, and just as for online businesses, the Redis-based "apartment complex" reduces the work of deploying IoT applications. The objective of this presentation is two-fold: 1. to spur the development of Redis-based applications on IoT edge devices, and 2. to spur discussion of how to efficiently interact between the Redis database on the IoT device and the cloud; this is an area where we would like input.

Video Experience Operational Insights in Real Time

Aditya Vaidya, Oath Inc

The Video Lifecycle Platform of Verizon Digital Media Service (VDMS) offers a solution for content providers to prepare, deliver, and monetize live, linear, or on-demand video content with best-in-class performance worldwide. Streaming flawless video content to large audiences is challenging, especially when the event being streamed is a global live event like an NFL game. Anything can go wrong during the event, and viewers will not tolerate disruptions. You essentially have only one chance to get it right, so it is absolutely critical to have real-time operational insight into the quality of the video stream so that the resolution time for any video stream issue can be shortened, thereby minimizing the disruption time. VDMS's Real Time Operational Insights Platform gathers data on quality of video experience as well as audience engagement in real time from video players.

Gathering quality-of-experience metrics, like the buffering performance of the different CDNs (content delivery networks) in our multi-CDN delivery ecosystem, provides the ability to redistribute traffic to another CDN in real time if the performance of one CDN causes an increase in buffering. On the other hand, gathering audience engagement metrics, such as concurrent video session views on partner properties like Yahoo Sports and AOL, allows our customers to get a real-time estimate of the scale of audience engagement during a game. This talk describes how Redis powers VDMS's Real Time Operational Insights Platform, which gathers video playback statistics for millions of concurrent viewers and aggregates and delivers relevant metrics across multiple dimensions with an end-to-end latency of 20 seconds. We will also see how Redis, with its probabilistic HyperLogLog data structure, helped us calculate unique concurrent viewers in real time over millions of video sessions using very limited memory.
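A minimal sketch of per-minute unique-viewer counting with HyperLogLog, assuming Python with redis-py and illustrative bucket/key naming:

```python
import time

import redis

r = redis.Redis(decode_responses=True)

def record_view(video_id, session_id):
    minute_bucket = int(time.time() // 60)
    key = f"concurrents:{video_id}:{minute_bucket}"
    r.pfadd(key, session_id)          # ~12 KB per key regardless of cardinality
    r.expire(key, 3600)               # keep an hour of buckets

def unique_viewers_last_minutes(video_id, minutes=5):
    now = int(time.time() // 60)
    keys = [f"concurrents:{video_id}:{b}" for b in range(now - minutes + 1, now + 1)]
    return r.pfcount(*keys)           # union across buckets, still approximate

record_view("nfl-game", "session-abc")
print(unique_viewers_last_minutes("nfl-game"))
```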

Redis at LINE, 25 Billion Messages Per Day

Jongyeol Choi, LINE+ Corporation

LINE Messenger is popular in Japan, Taiwan, Thailand, Indonesia, and many other countries, with more than 160 million monthly active users around the world. We typically deliver 25 billion messages per day. We depend heavily on Redis, HBase, and Kafka to provide a fast and reliable messaging system. For messaging alone, we use more than 60 Redis clusters and more than 10,000 Redis server processes on over 1,000 high-performance physical machines. Many of the clusters are not only caches but primary storage clusters. This presentation will focus on how we use Redis with other storage systems for messaging, and how we develop and operate internal Redis-related facilities to handle high traffic. I will talk about how we created a client-based sharding system without a proxy, as well as an internal monitoring system, and how we adopted an asynchronous Redis client in our production system. I will also talk about our recent experience with the official Redis Cluster. The main goal of this talk is to show how we develop and operate Redis facilities for a messaging system.

Managing 300 Billion API Calls / Month with Redis

Iddo Gino, RapidAPI

RapidAPI serves over 300B API calls per month, peaking at over 1 million calls per second. For each call, RapidAPI performs several steps, including authenticating the API key and API subscription and logging the request for analytics and billing. Being a proxy to APIs provided by Fortune 500 companies, RapidAPI has to keep the latency of those operations to single-digit milliseconds. RapidAPI uses Redis (on Redis Labs) as an analytics DB, streaming in writes to log API requests (including information like latency, quotas used, and response codes). The data is later used to present real-time API usage analytics and to generate invoices for API consumption. Our billing engine combines subscription information from our main (SQL) database with the Redis analytics to generate invoices. Generating a single invoice usually involves touching 10,000-100,000 Redis keys, so we use a special query language to query the SQL and Redis databases together and optimize those reads. During the talk, we'll present: the benchmark we ran with multiple time-series databases (streaming 10k-20k writes per second) to explain why we chose Redis; the schema we use to store time-series data aggregations in Redis, optimized for fast writes; how our real-time dashboard pulls changes from Redis; and our billing engine, which uses a special query language that 'JOIN's data between our SQL and Redis databases (open source).
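As an illustration of a write-optimized time-series schema for API analytics (Python with redis-py; the field and key names are assumptions, not RapidAPI's actual schema): one hash per API per minute, updated through a pipeline.

```python
import time

import redis

r = redis.Redis(decode_responses=True)

def log_call(api_id, latency_ms, status):
    bucket = int(time.time() // 60)           # one hash per API per minute
    key = f"stats:{api_id}:{bucket}"
    pipe = r.pipeline(transaction=False)
    pipe.hincrby(key, "calls", 1)
    pipe.hincrby(key, f"status:{status}", 1)
    pipe.hincrbyfloat(key, "latency_ms_sum", latency_ms)
    pipe.expire(key, 60 * 60 * 24 * 35)       # keep roughly a billing cycle of buckets
    pipe.execute()                            # one round-trip per logged call

def minute_stats(api_id, bucket):
    return r.hgetall(f"stats:{api_id}:{bucket}")

log_call("weather-api", latency_ms=12.5, status=200)
print(minute_stats("weather-api", int(time.time() // 60)))
```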

Migrating from Coherence to Redis

Vivek Rajput, RCI

The RCI.com web application relies heavily on caching for its services. Currently we use Oracle Coherence for caching, and we have decided to migrate to an open source technology to save cost, reduce complexity, and improve performance. In our research, the Redis platform topped other technologies in meeting our requirements. This session will explain how we architected our new system and which KPIs we measured during the migration process.

From Twitter to Redis Streaming Data

Jean Winget, Wgdesign

Mapping disaster relief scenarios requires accurate information from multiple sources. Twitter feeds combined with Redis streaming data are a new approach to delivering data to the mapping site and to first responders.

The Versatility of Redis: Powering Our Critical Business Using Redis

Luciano Sebanca, Movile

Step by step, Redis became one of the key tools for our critical business. The main advantages we found in Redis were its versatility and performance: we use Redis as a cache, a database, and a distributed lock. With Redis we now have much more resilient systems with very high throughput. We will talk about how we use Redis to store information about more than 160 million phones with sub-millisecond latency. We will also show how we use Redis for locking with load balancing between the clients. By using Redis inside our architecture, we enhanced the client lead purchase flow and saved money on media campaigns.

A Recommendation Engine For The Faint Hearted: Using Redis Sorted Sets And Set Theory

Kinane Domloje, Vinelab

A first dive into recommendation engines is a daunting challenge. Where to start? Which tools to use? Are real-time recommendations possible? Which recommendation algorithm to follow? How fast can recommendations be served? These are some of the questions that pop to mind, and answering them requires research and a deep understanding of the underlying mechanisms. Given Redis's gentle learning curve, using Redis as a recommendation engine renders the technology accessible to all. In this talk we intend to showcase how a recommendation engine can easily be implemented using Redis, sorted sets, and set theory, taking advantage of in-memory computation, a single data type, and a basic understanding of math. Our proposed approach is agnostic to the recommendation algorithm the business decides on, be it content-based or collaborative filtering: with the right data model at hand, our approach is replicable. With a content-based recommendation engine in mind, chosen for its simplicity and its adequacy to convey our message, we will focus on: 1. the design of a data model that fits the purpose; 2. the application of set-theory principles; 3. the use of intermediary, temporary sorted sets to chain operations; and 4. the use of dynamic parameters to tweak recommendations.
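A minimal content-based sketch in the spirit of the outline above, assuming Python with redis-py and made-up tag names and weights: per-tag sorted sets of items blended with a weighted ZUNIONSTORE into a temporary, expiring recommendation set.

```python
import redis

r = redis.Redis(decode_responses=True)

# Item scores per content tag (e.g. popularity or relevance).
r.zadd("tag:comedy", {"movie:1": 0.9, "movie:2": 0.6})
r.zadd("tag:romance", {"movie:2": 0.8, "movie:3": 0.7})

def recommend(user_id, tag_weights, top_n=10):
    dest = f"tmp:reco:{user_id}"
    # Weighted union chains the per-tag sets into one intermediate sorted set.
    r.zunionstore(dest, {f"tag:{t}": w for t, w in tag_weights.items()})
    r.expire(dest, 60)   # the intermediate result is temporary by design
    return r.zrevrange(dest, 0, top_n - 1, withscores=True)

# A user whose profile leans comedy 2:1 over romance:
print(recommend("u42", {"comedy": 2.0, "romance": 1.0}))
```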

Ad Serving Platform Using Redis

Sankalp Jonna, GreedyGame

User-based targeting is the core feature of an ad network. A modern ad network needs complex targeting, with advanced inclusion and exclusion rule sets based on attributes like location, game, OS, and device. For high performance, the targeting architecture should be able to perform quick set operations; it needs a fast database with native support for core set operations. Redis, the popular open source in-memory database, is known for its in-memory set operation capabilities. It helps us serve over 200M ad requests monthly on modest infrastructure: only three 2-core, 8 GB RAM machines. It takes about 10 milliseconds to serve each ad request and only 500 microseconds for an individual set operation in Redis. This session outlines the algorithm and code necessary to implement campaign targeting based on core targeting parameters.
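A minimal sketch of inclusion/exclusion targeting with native set operations, assuming Python with redis-py and illustrative attribute and campaign names: intersect the include sets, then subtract the exclude sets.

```python
import redis

r = redis.Redis(decode_responses=True)

# Campaigns indexed by targeting attribute.
r.sadd("attr:geo:IN", "camp:1", "camp:2", "camp:3")
r.sadd("attr:os:android", "camp:2", "camp:3")
r.sadd("attr:game:racing", "camp:3")

def eligible_campaigns(include_attrs, exclude_attrs):
    include_keys = [f"attr:{a}" for a in include_attrs]
    exclude_keys = [f"attr:{a}" for a in exclude_attrs]
    pipe = r.pipeline(transaction=False)
    pipe.sinterstore("tmp:include", include_keys)                   # all include rules must match
    pipe.sdiffstore("tmp:eligible", "tmp:include", *exclude_keys)   # then drop excluded campaigns
    pipe.smembers("tmp:eligible")
    return pipe.execute()[-1]

print(eligible_campaigns(["geo:IN", "os:android"], ["game:racing"]))  # -> {'camp:2'}
```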