How Adobe Leverages Redis to Serve/Secure Billions of API Requests

Ajay Kemparaj, Sr Computer Scientist, Adobe Systems

At the Adobe API Platform, Redis is one of the core critical components: any API served by Adobe I/O has Redis involved somewhere. We will cover use cases of Redis in the API gateway: how we use different Redis data structures to store API publisher metadata, store and expire JWT tokens, throttle and rate-limit requests, restrict certain users in dev mode as opposed to production mode, and use HyperLogLog for analytics. The session covers Redis data structures including HyperLogLog, Sets, Hashes, and Lists.
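
The throttling / rate-limiting mentioned above can be sketched as a fixed-window counter in a few lines of redis-py. The key scheme, limit, and window size here are illustrative assumptions, not Adobe's actual implementation:

```python
import time


def window_key(client_id: str, now: float, window_s: int = 60) -> str:
    """Key for the current fixed window, e.g. rl:abc:29301 (illustrative scheme)."""
    return f"rl:{client_id}:{int(now) // window_s}"


def allow_request(r, client_id: str, limit: int = 100, window_s: int = 60) -> bool:
    """INCR the per-window counter; the first hit sets a TTL so keys self-expire."""
    key = window_key(client_id, time.time(), window_s)
    count = r.incr(key)
    if count == 1:
        r.expire(key, window_s)
    return count <= limit
```

With `r = redis.Redis()`, each gateway node can call `allow_request(r, api_key)` before forwarding a request; because the counter lives in Redis, the limit holds across all gateway nodes.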

Microservices and Redis: A Match made in Heaven

Vijay Vangapandu, Software Architect, eHarmony

eHarmony is an online relationship services provider. Since its inception, it has made 20+ billion total matches, currently makes 15+ million matches per day globally, and handles 1+ million user communications each day. In the competitive dating industry, our product strives to push more features to users more rapidly and needs to adopt new processes and technologies to stay cutting-edge while keeping the existing application stable. In this session, we will discuss the business and product benefits of a microservices architecture and how Redis underpins the data services needed in this architecture. With microservices, our developers gain more independence and can scale smaller, loosely coupled components more easily. Our previous data store was operationally too cumbersome for this new lightweight architecture. We needed a NoSQL store that was easy to configure and maintain, consistent in its high availability, and suitable for different use cases. Simple but robust.

Redis Enterprise's low-touch operation and 24/7 support underpin our infrastructure, and we expect it to keep matching our growing needs and expectations. Redis and Redis Enterprise are used to supercharge the following use cases in our microservices ecosystem: authorization store, badge store, speed store in a lambda architecture, Sorted Sets for security, configuration services, Hibernate second-level cache, lightweight broker for publishing messages, rate control of API usage by IP/UA, and user services migration. This session will discuss how we use Redis data structures like Strings, Sets, Hashes, Lists, Sorted Sets, and HyperLogLog to deliver our microservices.

Redis at Lyft: 1,000 Instances and Beyond

Daniel Hochman, Lyft

Revisit Lyft's Redis infrastructure at a higher level following last year's focus on geospatial indexing. This year's talk is broadly applicable to optimizing any type of application powered by Redis. This session will take a broader look at Lyft's deployment of Redis and deep dive into what we've learned recently when writing our open-source Envoy Redis proxy. The talk will cover: deployment, configuration, partitioning, consistent hashing, data model optimization, and client abstractions.

Redis Analytics Use Cases

Leena Joshi, Redis Labs

Redis incorporates numerous cool data structures that make analytics simple and performant. Join this session to learn how customers use Redis for real-time analytics such as fraud detection, recommendation generation, maintaining distributed counts for metering, estimating element frequencies, and reporting. We will also discuss different ways to visualize data in Redis.

Redis is Dead. Long live Redis!

Ryan Luecke, Box, Inc.

Find out how Box migrated from Redis to Redis Cluster to power Recent Files, Webapp Sessions, and more. Discover the obstacles we faced when deploying hundreds of Redis instances inside Redis Clusters, as well as the real-world benefits we enjoy now that we have migrated. Learn about the automation we've built to help keep our Redis Clusters running and maintained.

Remote Monitoring and Control of a Scientific Instrument

Hardeep Sangha, ThermoFisher Scientific

The remote monitoring application allows users to monitor run data, send remote commands to alter the state of a run, and view analyzed sample files at the end of the run. The gist of this talk is the architectural details of this application, with Redis at its core. We will also discuss the other data storage options that were evaluated before settling on Redis.

A brief sketch of the application design: data ingestion is architected to be serverless using AWS Lambda. Multiple streams of data are funneled from the IoT gateway (with IoT Rules) into Kinesis and then into Lambdas, from where the data is loaded into Redis. The serverless architecture gives us the scale to respond to increased concurrency without having to provision capacity upfront. Due to the inherent cold-start behavior of Lambdas, however, we opted for pre-provisioned compute capacity (an EC2 cluster with auto-scaling, running across multiple zones) for data visualization, with data sourced directly from the Redis cluster, which keeps the visualizations highly performant.

Spring Data libraries form the data access layer. We make extensive use of sorted sets so that stream data is automatically sorted upon ingestion. Redis pub/sub and notifications are used to ensure data integrity and to clean up temporary artifacts. The cluster itself is multi-node with read replicas, running across multiple AWS zones for high availability and automatic failover.
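
The sorted-set ingestion pattern described above (stream data sorted automatically on insert, with the timestamp as the score) can be sketched as follows. The key layout and sample encoding are assumptions for illustration, not ThermoFisher's actual schema:

```python
import json


def encode_sample(ts: float, value: dict) -> str:
    """Serialize one reading; the timestamp prefix keeps members unique."""
    return f"{ts}:{json.dumps(value, sort_keys=True)}"


def decode_sample(member: str) -> tuple:
    ts, _, payload = member.partition(":")
    return float(ts), json.loads(payload)


def record_sample(r, run_id: str, ts: float, value: dict) -> None:
    """ZADD with the timestamp as score: reads come back time-ordered for free."""
    r.zadd(f"run:{run_id}:samples", {encode_sample(ts, value): ts})


def samples_between(r, run_id: str, t0: float, t1: float) -> list:
    """ZRANGEBYSCORE returns all samples in [t0, t1], already sorted by time."""
    return [decode_sample(m) for m in r.zrangebyscore(f"run:{run_id}:samples", t0, t1)]
```

Because the sorted set maintains order on insert, out-of-order arrivals from the IoT pipeline need no post-processing before visualization.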

Writing Modular & Encapsulated Redis Client Code

Jim Nelson, Internet Archive

Programming Redis client code often means breaking encapsulation in order to maximize Redis efficiency, reduce network round-trips, and/or take advantage of transactions. This leads to classic software engineering problems: difficult-to-maintain code, contract violations, tightly-locked dependencies... balls of mud that are fragile and sensitive to change. This talk discusses strategies for developing modular and contained Redis code via concepts such as promises/futures and event notifications. Presented code will be in PHP using the Predis client, but all the concepts presented here should be portable to other languages and libraries.

Increase Application Performance with SQL Auto-Caching; No Code Changes

Roland Lee, Heimdall Data
Erik Brandsberg, CTO, Heimdall Data

Configuring caches is complex. Engineering teams spend significant resources building and maintaining cache subsystems, and the manual intervention required (i.e., deciding what to cache and what to invalidate) is risky. Now there is a solution that provides granular SQL analytics while safely auto-caching into Redis. Heimdall Data intelligently caches, invalidates, and synchronizes across Redis Labs nodes: no more misconfigurations or stale data, and zero changes to your existing application or database. In this session, learn how easy it is to speed up your application in five minutes with Heimdall for Redis Labs, see how Heimdall analytics helps you identify SQL bottlenecks, and watch a demo of Heimdall with Redis Labs in action.

Amazing User Experiences with Redis & RediSearch

Stefano Fratini, Siteminder
Leo Wei, Tech Lead / Solution Architect, Siteminder

What makes a user experience amazing? The way a user interacts with the system (interface) and the way the system responds (response time). Redis is fast on its own, and RediSearch makes it a great, easy tool to power autocomplete and full-text search, giving your users the experience they deserve, from back to front.
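
The autocomplete mentioned above is typically built on RediSearch's suggestion dictionary (FT.SUGADD / FT.SUGGET). A minimal sketch via redis-py's generic command interface; the key name and scores are illustrative:

```python
def add_suggestion(r, key: str, term: str, score: float = 1.0) -> None:
    """FT.SUGADD inserts a weighted term into a RediSearch suggestion dictionary."""
    r.execute_command("FT.SUGADD", key, term, score)


def complete(r, key: str, prefix: str, fuzzy: bool = False) -> list:
    """FT.SUGGET returns terms matching the prefix, best-scored first."""
    args = ["FT.SUGGET", key, prefix]
    if fuzzy:
        args.append("FUZZY")
    return r.execute_command(*args)
```

Boosting a term's score on each selection (FT.SUGADD with the INCR flag) lets the suggestions adapt to what users actually pick.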

Redis Memory Optimization

Sripathi Krishnan, HashedIn

This is a practical guide to optimizing the memory usage of your Redis instance. The talk will introduce the internal data structures used by Redis: quicklists, intsets, skiplists, hash tables, and so on. It will then explain how the choice of data structure affects memory usage, and how you as a developer can influence which data structure Redis picks. The talk will also cover common mistakes developers make that cause bloat in Redis memory consumption. Finally, it will walk through how you can use redis-rdb-tools to diagnose and fix memory problems.
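
One concrete lever for influencing which internal encoding Redis picks is the set of conversion thresholds in redis.conf; below these limits, hashes, sets, and sorted sets are kept in their compact encodings. The values shown are typical defaults for Redis 3.x/4.x, quoted here for illustration:

```conf
# Hashes stay in the compact ziplist encoding while both limits hold
hash-max-ziplist-entries 128
hash-max-ziplist-value 64

# Sets of integers stay as an intset up to this many elements
set-max-intset-entries 512

# Sorted sets likewise use a ziplist while small
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
```

Keeping objects under these thresholds, for example by splitting one large hash into many small ones, can reduce memory use several-fold.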

Train and Serve Real-Time AI Models in Live Production with Redis, Kafka, TensorFlow, GPUs, and Kubernetes

Chris Fregly, PipelineAI, Founder and CTO

Applying my Netflix experience to a real-world problem in the ML and AI world, I will demonstrate a full-featured, open-source, end-to-end TensorFlow Model Training and Deployment System using the latest advancements with TensorFlow, Kubernetes, OpenFaaS, GPUs, and PipelineAI.

In addition to training and hyper-parameter tuning, our model deployment pipeline will include continuous canary deployments of our TensorFlow Models into a live, hybrid-cloud production environment.

This is the holy grail of data science - rapid and safe experiments of ML / AI models directly in production.

Following the famous Netflix Culture that encourages "Freedom and Responsibility", I use this talk to demonstrate how Data Scientists can use PipelineAI to safely deploy their ML / AI pipelines into production using live data.

Offline, batch model training is for the slow and weak. Online, real-time model predicting on live production data is for the fast and strong.

Learn to be fast and strong by attending this talk.

Tailoring Redis Modules for Your User Needs

Adam Lev-Libfeld, Tamar Technological Solutions

How and when should you approach designing a Redis module for your system? Even though Redis modules have been available for over a year and there is plenty of interest in the possibilities this meta-feature enables, there has been little to no adoption of modules written by Redis users to satisfy unique needs. I believe this is mostly due to an inability to discern when, where, and how these modules can be beneficial, and to the difficulty of assessing the cost of developing them. In this talk, I will give the audience the tools to understand the conditions under which migrating existing code to a Redis module can (or can't) improve performance and user experience while helping contain costs, and how to make use of those conditions. The talk revolves around two essential concepts: 1) three common use cases, all drawn from real-life scenarios, and 2) methods and techniques for quickly migrating an existing codebase into module form.

Redis on Flash: A Game-Changing Database Platform for Etermax

Gonzalo Garcia, Etermax
Esteban Masoero, Technical Owner of the Platform team, Etermax

Etermax, a mobile gaming company and the creator of popular hits like Pictionary and Trivia Crack, is the leading social games company in Latin America. We've emerged as an industry model for cross-platform game development in the region. With a total of 300 million installs, 20 million daily active users, and peaks of up to 25 million users for our most popular games, our company has grown exponentially and greatly benefited from Redis Enterprise's high scalability and storage capabilities.

Join us in this session to learn how we use Redis Enterprise for user sessions, device management, password authentication, Pub/Sub, query caching, and much more. We will discuss our use of Redis Enterprise to store user- and game-related information in Hashes. We will illustrate how we use Redis for real-time analytics: storing user preferences, likes/dislikes, attempts on questions and correct answers, graphing question difficulty, and determining the order in which to serve questions. We also use Redis to search for old user names, generate notifications that games are about to expire, build small systems for game features, and more.

Additionally, we will share details on the several advantages of using Redis on Flash. With Redis on Flash, we store old and inactive user data on Flash SSDs and our newest/relevant active users on RAM. With the proven Redis speed and the cost-effectiveness of Flash SSDs, we now guarantee our users sub-millisecond response times and high throughput, while cutting infrastructure spending by over 70%.

We employ Redis for many use cases; compared to SQL-based databases, Redis on Flash was able to handle the ops/sec we required with the least infrastructure, cutting our costs and resource usage tremendously.

Fail-Safe Starvation-Free Durable Priority Queues in Redis

Jesse Willett, ProsperWorks, Inc.

ProsperWorks is a cloud-based CRM that integrates with Gmail, Google Drive, and the rest of the Google Apps suite. We have a growing fleet of asynchronous job types. Many of these touch systems that are slow, are close to their scale limits, have sensitive rate limits, or exhibit frequent transient failures. Moreover, our compute containers are terminated at arbitrary times. In most cases we don't know these limits in advance, or the limits change over time. We address these problems with a variety of strategies, which this talk will cover. At the heart of these techniques are "Icks": priority queues with write-folding and two-phase commit, implemented in Redis/Lua. Icks are not unlike the BRPOPLPUSH pattern, but they offer additional non-starvation guarantees and operational flexibility.
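
The BRPOPLPUSH pattern referenced above (atomically moving a job to a per-worker in-flight list, then removing it only after success) can be sketched like this. Key names are illustrative, and this deliberately omits the write-folding and non-starvation guarantees that distinguish Icks:

```python
def reserve(r, queue: str, worker: str):
    """Atomically move the oldest job to this worker's in-flight list."""
    return r.rpoplpush(queue, f"{queue}:inflight:{worker}")


def ack(r, queue: str, worker: str, job) -> None:
    """Phase two of the two-phase commit: drop the job once it is processed."""
    r.lrem(f"{queue}:inflight:{worker}", 1, job)


def requeue_inflight(r, queue: str, worker: str) -> None:
    """Recovery: push a dead worker's unacked jobs back onto the main queue."""
    while r.rpoplpush(f"{queue}:inflight:{worker}", queue) is not None:
        pass
```

If a container is killed between `reserve` and `ack`, the job survives in the in-flight list and `requeue_inflight` returns it to the queue, so no work is lost.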

Integrating Redis with ElasticSearch to Get the Best Out of Both Technologies

Dmitry Polyakovsky, Zumobi
  • Using new Redis Streams and RediSearch to work with time series data
  • Moving data between Redis and ElasticSearch
  • Building rich dashboards to visualize data and doing ad hoc analysis
  • Sharing practical solutions based on personal experience and real world problems

Application of Redis in IOT Edge Devices

Glenn Edgar, Lacima Ranch

Lacima Ranch is an avocado ranch in Riverside, California. The ranch has a limited amount of manual labor, so we have spent the last 5 years developing automation systems, which we open sourced. About 4 years ago we discovered Redis by accident and incorporated it into our control system. Initially we wanted to replace SQLite because we needed multi-process operation. The target device was 32-bit Linux on an ARM processor with 1 GB of RAM. Redis operated satisfactorily in this environment.

Over the years we found that Redis became central to our controller software. Redis served the role of a “Swiss Army Knife” in our software providing the services of a Primary Data Store, Event Bus, Time Series Streams, Job Queues and RPC Queues. Time Series streams were handled by JSON serialized lists. The introduction of the new Streams data type in Redis will make handling streams much more effective. Over time, our software went from a large Linux process to many small processes. We found that processes could be limited to talking to just the Redis database. This feature made software development and testing much easier.

The system's complexity made us realize that our control system needed to be managed as a cloud system rather than as an embedded system. Introducing new functions was getting very complex: data produced in one process needed to be consumed in many other processes, and managing Redis keys and data across multiple processes became very difficult. Redis then introduced a graph database module, which greatly simplified the problem. Redis keys and their access classes could be generated automatically, so the user no longer had to name Redis keys manually.

We had problems with our cloud integration, in that the software on the edge device and the software in the cloud lived different lives. Coordination between the two became difficult, and for several years we tried different mechanisms with limited success. To solve this problem, we noticed that the graph on the edge device is a subset of the system graph. Applications could run on the edge or in the cloud equally well; the only restriction is that the data an application needs must be present in the Redis database on its respective server.

This produced a new IoT edge-to-cloud communication paradigm: instead of communicating by messages, communication happens by replicating parts of the Redis database between the cloud and the gateway device. Since all Redis accesses go through centralized handlers, moving data between the edge gateway database and the cloud database can be handled in a modular fashion by various means. Hopefully this mechanism can be built into Redis in the future.

Video Experience Operational Insights in Real Time

Aditya Vaidya, Oath Inc

The Video Lifecycle Platform of Verizon Digital Media Services (VDMS) offers solutions for content providers to prepare, deliver, and monetize live, linear, or on-demand video content with best-in-class performance worldwide. Streaming flawless video to large audiences is challenging, especially when the event being streamed is a global live event like an NFL game. Anything can go wrong during the event, and viewers will not tolerate disruptions. You have only one chance to get it right, so it is absolutely critical to have real-time operational insight into the quality of the video stream: the faster any issue can be resolved, the shorter the disruption. VDMS's Real Time Operational Insights Platform gathers data on quality of video experience as well as audience engagement in real time from video players.

Gathering quality-of-experience metrics, such as the buffering performance of the different CDNs (Content Delivery Networks) in our multi-CDN delivery ecosystem, gives us the ability to redistribute traffic to another CDN in real time if one CDN's performance causes increased buffering. Gathering audience engagement metrics, such as concurrent video sessions on partner properties like Yahoo Sports and AOL, lets our customers estimate in real time the scale of audience engagement during a game. This session describes how Redis powers VDMS's Real Time Operational Insights Platform, which gathers video playback statistics for millions of concurrent viewers and aggregates and delivers relevant metrics across multiple dimensions with an end-to-end latency of 20 seconds. We will also see how Redis, with its probabilistic HyperLogLog data structure, helped us calculate unique concurrent viewers in real time over millions of video sessions using very limited memory.
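
The HyperLogLog counting described above boils down to two commands: PFADD registers each session id, and PFCOUNT estimates the number of distinct ids in roughly 12 KB per key, regardless of cardinality. The key naming below is an illustrative assumption:

```python
def track_view(r, video_id: str, session_id: str) -> None:
    """PFADD is idempotent: re-adding a seen session id does not change the count."""
    r.pfadd(f"viewers:{video_id}", session_id)


def unique_viewers(r, video_id: str) -> int:
    """PFCOUNT estimates distinct sessions with about 0.81% standard error."""
    return r.pfcount(f"viewers:{video_id}")
```

PFMERGE can additionally combine per-CDN or per-region keys into one overall estimate without re-counting.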

Redis at LINE, 25 Billion Messages Per Day

Jongyeol Choi, LINE+ Corporation

LINE Messenger is popular in Japan, Taiwan, Thailand, Indonesia, and many other countries, with more than 160 million monthly active users around the world. We typically deliver 25 billion messages per day, and we depend heavily on Redis, HBase, and Kafka to provide a fast and reliable messaging system. For messaging alone, we run more than 60 Redis clusters and more than 10,000 Redis server processes on over 1,000 high-performance physical machines. Many of these clusters are not just caches but primary storage. This presentation will focus on how we use Redis with other storage systems for messaging, and how we develop and operate internal Redis-related facilities to handle high traffic. I will talk about how we created a client-side sharding system without a proxy, our internal monitoring system, and how we adopted an asynchronous Redis client in the production system. I will also talk about our recent experience with official Redis Cluster. The main goal of this talk is to show how we develop and operate Redis facilities for a messaging system.

Migrating from Coherence to Redis

Vivek Rajput, RCI
Balaji Mariyappan, Solutions Engineer, RCI

RCI's web application relies heavily on caching for its services. We currently use Oracle Coherence for caching, and we have decided to migrate to an open-source technology to save cost, reduce complexity, and improve performance. In our research, the Redis platform topped the other technologies we evaluated against our requirements. This paper will explain how we architected our new system and which KPIs we measured during the migration process.

From Twitter to Redis Streaming Data

Jean Winget, Wgdesign

Mapping disaster relief scenarios requires accurate information from multiple sources. Combining Twitter feeds with Redis streaming data is a new approach to delivering data to the mapping site and to first responders.

The Versatility of Redis: Powering Our Critical Business Using Redis

Luciano Sabenca, Movile
Eiti Kimura, Architect, Movile

Step by step, Redis became one of the key tools for our critical business. The main advantages we found in Redis were its versatility and performance. We use Redis as a cache, a database, and a distributed lock. With Redis we now have much more resilient systems with very high throughput. We will talk about how we use Redis to store information about more than 160 million phones with sub-millisecond latency. We will also show how we use Redis to implement locking with load balancing among clients. By using Redis inside our architecture we improved our clients' lead purchases and saved money on media campaigns.

Ad Serving Platform Using Redis

Sankalp Jonna, GreedyGame

User-based targeting is the core feature of an ad network company. A modern ad network needs complex targeting, with advanced inclusion and exclusion rule sets based on attributes like location, game, OS, and device. For high-performance targeting, the architecture must be able to perform fast set operations, which calls for a fast database that supports core set operations natively. Redis, the popular open-source in-memory database, is known for exactly this in-memory set-operation capability. It helps us serve over 200M ad requests monthly with minimal compute: just three 2-core, 8 GB RAM machines. It takes about 10 milliseconds to serve each ad request and only 500 microseconds per individual set operation in Redis. This paper outlines the algorithm and code needed to implement campaign targeting based on core targeting parameters.
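
The kind of include/exclude targeting described above maps directly onto Redis set intersections and unions. A hypothetical sketch, where each attribute value (e.g. a geo or OS) owns a set of eligible campaign ids; the key naming is an assumption for illustration:

```python
def eligible_campaigns(r, include_keys, exclude_keys=()):
    """Campaigns matching every include attribute and no exclude attribute.

    include_keys / exclude_keys are Redis set keys such as "geo:IN" or
    "os:android" (illustrative naming), each holding campaign ids.
    """
    matched = set(r.sinter(*include_keys))      # must satisfy all inclusion rules
    if exclude_keys:
        matched -= set(r.sunion(*exclude_keys))  # drop anything explicitly excluded
    return matched
```

For repeated queries, SINTERSTORE can cache the intersection server-side instead of shipping members to the client each time.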

Serving Automated Home Valuation with Redis and Kafka

Jiaqi Wang, Redfin

As a technology powered real estate brokerage, one of the most important questions Redfin can answer for homeowners across the entire country is how much is my home worth, a.k.a. the Redfin Estimate. To make it a service that supports various use cases on web, mobile app, email, and partner products, we leverage Redis’ high performance as a caching layer and Kafka stream processing as a warming/invalidation mechanism to decrease the service response time while maintaining high availability and consistency.

Redis Fault-Injection

Khalid Lafi, Misk Innovation Lab

From the user's point of view, Redis is a simple and easy-to-use system. Consequently, Redis has become an essential building block in an increasing number of complex and mission-critical systems. Having built a system that relies on Redis to store its transient state, Khalid realized that to have full confidence in it, he needed to understand its behavior when Redis starts failing. So he created an abstraction on top of the client library his team was using, which gave him hooks to inject failures into almost every part that touches Redis. This helped him find a number of critical bugs that would otherwise have taken days to debug in production. Seeing how useful this technique was, Khalid created RedFI, an open-source fault-injection Redis proxy. He made it a proxy/sidecar so that it would be language-agnostic and useful to most of the community.

Techniques for Synchronizing In-Memory Caches with Redis

Ben Malec, Paylocity

For some highly accessed values, even a single network round-trip incurs too much latency. An obvious solution is adding an in-memory caching layer, but that brings many challenges around keeping data in sync across multiple clients. This presentation will detail the approach Paylocity implemented, which leverages Redis Pub/Sub, buckets keys to minimize the length of synchronization messages, and carefully exploits order of operations to eliminate the need for a master synchronization clock.
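
The bucketing idea can be sketched as follows: keys hash into a fixed number of buckets, writers publish only the bucket id, and every subscriber evicts that bucket from its local cache. The channel name, hash choice, and bucket count are illustrative assumptions, not Paylocity's actual values:

```python
import zlib

N_BUCKETS = 1024
CHANNEL = "cache-invalidation"


def bucket_for(key: str, n_buckets: int = N_BUCKETS) -> int:
    """Stable key -> bucket mapping; messages carry a small int, not full keys."""
    return zlib.crc32(key.encode()) % n_buckets


def publish_invalidation(r, key: str) -> None:
    """Writer side: announce that anything in this key's bucket may be stale."""
    r.publish(CHANNEL, str(bucket_for(key)))


def handle_message(local_cache: dict, message: str) -> None:
    """Subscriber side: evict every locally cached key in the named bucket."""
    bucket = int(message)
    for key in [k for k in local_cache if bucket_for(k) == bucket]:
        del local_cache[key]
```

Over-invalidating a whole bucket trades a few extra cache misses for much shorter Pub/Sub messages, which matters when the hot key space is large.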

Implementing a New Data Structure for Redis as an Inexperienced C Programmer

Loris Cro

I have little experience with C programming and yet I wrote and published Redis-cuckoofilter, a Redis module that implements Cuckoo filters.

Starting from the original paper that presents the data structure, I worked my way up to a functioning module. While the Redis Module ecosystem is new and still subject to change, the APIs are already nice enough to build something upon, and have a good time while doing it, even as a programmer that doesn't have extensive knowledge of the C ecosystem.

This talk is an overview of my journey developing the aforementioned module and the main things that I learnt from that experience, which include: being smart about testing, thinking about API design choices, and more.

My Other Car is a Redis Cluster

Ramon Snir, Dynamic Yield

Dynamic Yield doesn't see itself as a "Redis shop", but that hasn't stopped it from adopting Redis for dozens of use cases. Over three years we have reached tens of thousands of transactions per second and amassed over a terabyte of unique (non-cache) data in our Redis databases. We see Redis as an integral "second database" in our architecture: it brings extra real-time and low-latency properties to our machine learning and personalization systems. Being able to adapt to new requirements without rearchitecting, and to scale both throughput and data size with Redis Cluster, Redis Enterprise, and Redis Enterprise on Flash, makes Redis a common part of most of our projects.

Event-Driven Microservices with Event Sourcing and Redis

Tom Mooney

When building a microservices-based system, one of the core problems that must be addressed is how to share data between services while maintaining availability and consistency. An event-driven architecture allows services to remain highly decoupled while letting data be used in new and novel ways in the future that cannot be anticipated now, without the need for sweeping changes to the existing system. Event Sourcing is a pattern for implementing an event-driven system that offers atomic operations in a distributed system without expensive and complicated distributed transactions. Redis is a great technology for implementing Event Sourcing because it is already in use in many organizations, offers outstanding performance, and has built-in pub/sub functionality. This presentation will focus on some of the key concerns with using Redis as a persistent event store and event bus. It will provide a brief overview of the Event Sourcing pattern and how Redis can be used as a primary data store for persisting event data when implementing this pattern. It will also discuss how to use Redis pub/sub and Redis data structures to create a reliable queue that allows data to be streamed to multiple consumers asynchronously while minimizing the possibility of data loss.
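
One common shape for a Redis event store of the kind discussed above is an append-only list per aggregate plus a pub/sub notification, so consumers wake up and re-read anything they missed. A minimal sketch; the key and channel names are assumptions for illustration:

```python
import json
import time


def append_event(r, aggregate_id: str, event_type: str, payload: dict) -> None:
    """Persist the event, then notify subscribers which stream grew.

    The pub/sub message is fire-and-forget; consumers that miss it can still
    recover by re-reading the list from their last known offset.
    """
    event = json.dumps({"type": event_type, "ts": time.time(), "data": payload})
    r.rpush(f"events:{aggregate_id}", event)
    r.publish("events", aggregate_id)


def events_since(r, aggregate_id: str, offset: int) -> list:
    """Replay: everything appended at or after `offset` (0 = full history)."""
    return [json.loads(e) for e in r.lrange(f"events:{aggregate_id}", offset, -1)]
```

Because the list is the durable source of truth and pub/sub is only a hint, a restarted consumer loses no events; it just replays from its stored offset.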

Your First Microservice with Kubernetes, Helm Charts, and Redis

Dan Garfield, Codefresh

We'll go through when it's right to split up microservices and leverage Kubernetes, Helm, and Redis.

Global Redis Clusters, a Romantic Comedy

Kurt Mackey

How we manage state across 15 datacenters and expose distributed services to customers via a JavaScript API.

Designing a Redis Client for Humans

Marc Gravell, Stack Overflow

The Redis client story is generally pretty healthy, with a wide range of client libraries available covering almost every mainstream platform/language, and a few besides. But the features and usability vary widely, especially given the changing demands of recent features like Cluster (3.0) and Modules (4.0). In this session, Marc will draw on his experience developing multiple Redis libraries and tools to discuss the good, the bad, the ugly, and the hopeful future of the Redis client experience.

Transforming Vulnerability Telemetry with Redis Enterprise

Darren Chinen, Malwarebytes

Malwarebytes, the industry-leading anti-malware and internet security software company, also leads in innovative security tools. One such tool is its interactive remediation map, a worldwide security vulnerability tool that tracks malware through real-world scans. Join us to learn how Redis Enterprise helps us visualize data in real time and provides global information on detected threats: how a piece of malware is conceived and how it eventually spreads across the world. In this talk, we will illustrate how we use Redis Enterprise to drive analytics for heat maps and for the customer-facing OMNI portal. We will also describe our use of Redis Enterprise for fast ingestion of customer data, session management, time-series analysis, and predictive analytics.

Using a Redis Backend in a Serverless Application with Kubeless

Simon Bennett, Bitnami

Kubernetes, which has emerged as the de-facto standard for containerized applications, is a perfect system to build a serverless solution on top of. This talk will introduce Kubernetes quickly by showing how to deploy a Redis cluster using the Helm package manager, and then show how Kubeless, a serverless solution, can be used to build a frontend API that uses the deployed Redis cluster as a backend. The talk will highlight how Redis is deployed in Kubernetes and how it can quickly be used from functions.

Redis Dev, Test, and Production with OpenShift Service Catalog

Ryan Jarvinen, Red Hat

Learn how Red Hat uses Kubernetes and the Open Service Broker to deliver customized instances of Redis, adapted to meet the unique needs and use-cases present in Dev, Test, and Production environments. This talk includes a short overview of the Open Service Broker and related technologies, before moving on to demos and example code. Come see the workflow benefits that are possible when using a container-based PaaS to automate the configuration, delivery, and management of your Redis services.

Scaling With Whitepages Global Identity Graph Dataset Without Sacrificing on Latency

Heather Wade, Varun Kumar, Jason Frazier, Whitepages, Inc.

Whitepages Inc. is the global leader in digital identity verification for businesses and consumers. Its open-search web properties see over 55 million visitors accessing North American contact information and public records, and it provides businesses with global, enterprise-scale APIs and web tools to fight fraud and identify good customers at more than a million transactions daily. To best serve this expansive user and customer base, Whitepages developed its own fully integrated, high-availability Identity Graph database, which houses over 5 billion global identity records as a connected graph. When it came time to evaluate databases for caching their online contact records, Redis was a top contender for its speed and scalability. Join Whitepages to learn how Redis on Flash helps them handle the already large, ever-growing amount of data they store without sacrificing response time or throughput; how simple it was to integrate Redis on Flash; and how Redis Enterprise performed better than the alternatives with less infrastructure.

Building Lightweight Microservices Using Redis

Carlos Justiniano, Chief Software Architect, Flywheel Sports

Building microservices can be a daunting experience. Small teams often struggle with the dizzying array of tools required to build, test, and maintain services. In this talk, we'll examine how to address key microservice concerns using Redis as one of the few external infrastructure dependencies. This talk is for individuals who have started, or are about to embark on, a journey to build microservices and are wondering how Redis can help.

Two years ago we set out to embrace microservices. When we started, we had a small team of developers with experience building Node/ExpressJS applications using NoSQL databases and Redis. What we collectively lacked was experience with the overwhelming array of tools used to build and deploy microservices in the cloud: etcd, Consul, and ZooKeeper for service discovery; Docker and Kubernetes for containerization and orchestration. The choice we faced was to slow feature development in order to ramp up on those tools, or to "right-size" a solution to meet our needs. We chose the latter and set out to create a small microservices library that wouldn't overwhelm Node and ExpressJS developers. Our efforts resulted in two libraries and a few supporting tools. At the core of our microservice solution is Redis. By building our Hydra library on Redis, we were able to address the following key microservice concerns:
  • Service discovery
  • Presence management
  • Interservice messaging over HTTP REST and/or Redis pub/sub
  • Built-in load balancing and routing
  • Service-level message queues
  • Near-zero configuration

Fast forward to today: our use of Redis and our Hydra microservices library powers our live video streaming fitness classes via our nationwide FlyAnywhere platform.

Making Real-Time Predictive Decisions with Redis-ML

Tague Griffith, Head of Developer Advocacy, Redis Labs

Most of the energy and attention in machine learning has focused on the model-training side of the problem. Multiple frameworks, in every language, provide developers with access to a host of data-manipulation and training algorithms, but until recently developers had virtually no frameworks for building predictive engines from trained ML models. Most developers resorted to building custom applications, but building highly available, highly performant applications is difficult. Redis, in conjunction with the Redis-ML module, provides a server framework for developers to build predictive engines with familiar, off-the-shelf components. Developers can take advantage of all the features of Redis to deliver faster and more reliable prediction engines with less custom development. This technical session examines how Redis can be used in conjunction with a variety of training platforms to deliver real-time predictive and decision-making features as part of a larger system. To set the context, we start with an introduction to the Redis data model and how features of Redis (namespaces, replication) can be used to build fast predictive engines at scale that are more reliable, more feature-rich and easier to manage than custom applications. From there, we look at the model-serving capabilities of Redis-ML and how they can be integrated with an ML pipeline to automate the entire model-development process from training to deployment. The session ends with a demonstration of a simple machine learning pipeline: using scikit-learn, we train several example models, load them directly into Redis and demonstrate Redis as the predictive engine for making real-time recommendations.

Introducing Real-Time Insights with the RediSearch Aggregations

Dvir Volk, Senior Architect, Redis Labs

This session will introduce users to the new Redis Query Language (RQL) and how they can employ it in their applications.

CRDTs and Redis—From Sequential to Concurrent Executions

Carlos Baquero, Assistant Professor, Universidade do Minho

With CRDB, Redis now supports geo-replication and allows each replica site to update the system with no immediate coordination, leading to very high availability and low operation latency. In other words, it supports multi-master replication (aka active-active). From the user application's perspective, the system can no longer be seen as a single sequential copy, since operations can now be processed concurrently at different locations. Conflict-free Replicated Data Types (CRDTs) can take away a lot of the complexity of migrating from a sequential to a concurrent setting. In this talk we will explore this transition and cover what can be expected of Redis data types operating under active-active deployments. Concurrent behaviour will be explored for cases such as Redis Sets, Strings (both as registers and as counters), and other types.
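
As a toy illustration of the CRDT idea (not Redis's internal implementation), a state-based grow-only counter shows why concurrent updates can merge without coordination:

```python
def merge_counters(a, b):
    """Merge two G-Counter states (replica id -> local count) by taking the
    per-replica maximum. Merge is commutative, associative and idempotent,
    so replicas can exchange state in any order and still converge."""
    return {replica: max(a.get(replica, 0), b.get(replica, 0))
            for replica in set(a) | set(b)}

def counter_value(state):
    """The counter's value is the sum of every replica's local count."""
    return sum(state.values())
```

Each replica increments only its own entry, so a merge never loses an update; CRDT counters in active-active deployments converge on the same principle.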

Building Active-Active Geo-Distributed Applications with the New Redis CRDTs

Cihan Biyikoglu, VP of Product Management, Redis Labs

Mission-critical applications look for geo-redundancy to protect against regional failures, from WAN connectivity issues to colossal natural disasters. Research around CRDTs outlines safe, lightweight and predictable mathematical models for handling active-active, geo-distributed database applications that can achieve strong eventual consistency. In this talk, we will look at common CRDT uses for building database applications with high-velocity, high-variety data. We'll see demos of how Redis's CRDT implementation provides a simple but safe active-active, geo-distributed database platform for applications looking for active-active topologies without compromising throughput or latency.

The Design of Redis Streams

Salvatore Sanfilippo, Creator of Redis

In this session, Salvatore Sanfilippo, the creator of Redis, will discuss the design and implementation of the newest Redis data type - streams.

Common Redis Use Cases for Cloud Native Apps and Microservices

Nathaniel T. Schutta, Software Architect, Pivotal

As businesses transform their traditional applications into cloud-native applications, they turn to born-in-cloud technologies like Redis. This talk will dive deep into common ways Redis breathes new life into tired old apps, such as a globally distributed session store, pub/sub, message queues, asynchronous messaging and streaming data.
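
One of those use cases, a distributed session store, can be sketched as a Redis hash with a sliding TTL (a hypothetical example using redis-py, not code from the talk):

```python
SESSION_TTL = 1800  # 30 minutes of idle time before a session is dropped

def save_session(r, session_id, data):
    """Store a web session as a Redis hash that expires when idle."""
    key = f"session:{session_id}"
    r.hset(key, mapping=data)
    r.expire(key, SESSION_TTL)

def load_session(r, session_id):
    """Fetch a session and renew its TTL (sliding expiration)."""
    key = f"session:{session_id}"
    r.expire(key, SESSION_TTL)  # touching the session keeps it alive
    return r.hgetall(key)
```

Because any app server can read or write the hash, users stay logged in no matter which instance handles their next request.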

Redis Enterprise on Cloud-Native Platforms

Vick Kelkar, Uri Shachar, Redis Labs

Breaking down monolithic applications into microservices offers advantages like faster deployments and faster build times for your containerized applications. To manage those containerized applications, you will have to rely on the container orchestration offered by cloud-native platforms.

In this presentation, we will introduce you to the concepts of cloud-native platforms. We will focus on a couple of container platforms, Pivotal Cloud Foundry and Kubernetes, discussing the similarities and differences between them and the ideal use cases for each.

We will introduce Redis Enterprise software and share our integration efforts with the two platforms. We will discuss how we designed and integrated Redis Enterprise with each platform, and conclude by showing a demo of our integrations on these cloud-native platforms.

Scaling Slack’s Job Queue Using Kafka and Redis

Matthew Smillie, Saroj Yadav, Slack

Slack is used by millions of people, and its Redis-based job queue processes over 1.4 billion jobs a day at a peak rate of 33,000 per second. The talk will detail the work undertaken to integrate Kafka with the existing highly performant Redis queue, which we outlined in a recent blog post. For the talk, we will expand on the technical aspects of that post, in particular providing more detail on:

  • the implementation of our Redis-based queue
  • our operational experience scaling this implementation as Slack underwent significant growth in both features and users
  • the outage that led us to re-evaluate our approach (because who doesn't love a good disaster story)
  • the decision to keep Redis as the core of our job queue

We'll discuss Kafka's role in the resulting architecture. Along with allowing us to maintain our well-proven Redis architecture, it improved availability on the enqueue side of the queue, as well as resiliency to data loss. The key takeaways from the talk:

  • operational experience with Redis in a real-time environment, at large scale, including a detailed postmortem case study
  • the design trade-offs that we considered to continue to scale the job queue
  • engineering methodology to implement the change without disruption in a fast-moving environment
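
The core pattern behind a Redis-based job queue is a list used as a FIFO: producers LPUSH, workers block on BRPOP. A minimal, hypothetical sketch (not Slack's actual implementation) in redis-py style:

```python
import json

def enqueue(r, queue, job):
    """Producer: push a job onto the left end of the queue's list."""
    r.lpush(f"jobs:{queue}", json.dumps(job))

def work_one(r, queue, handler, timeout=5):
    """Worker: block until a job arrives on the right end, then run it.
    Returns False if the wait timed out with nothing to do."""
    item = r.brpop(f"jobs:{queue}", timeout=timeout)
    if item is None:
        return False
    _key, payload = item  # BRPOP returns a (key, value) pair
    handler(json.loads(payload))
    return True
```

Note that BRPOP removes the job before the handler runs, so a worker crash can lose it; that durability gap is exactly the kind of concern that motivated adding Kafka in front of the queue.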

Redis Cluster Provisioning with Kubernetes Service-Catalog Extension

Cedric Lamoriniere, Amadeus

As Kubernetes has gained momentum, it has become the standard solution for orchestrating containerized applications, and a complete ecosystem is now growing up around it. The Kubernetes "Service Catalog" API extension was developed to enable applications running on the platform to easily use externally managed software offerings, such as a datastore service. It provides a generic way to list, provision and bind externally managed services to your application. In this session, you'll learn how your application can interact with the "Redis broker" through the service-catalog API in order to provision a Redis cluster tailored to your application's needs. The solution presented is based on the "Redis Operator" that operates a Redis cluster in Kubernetes (presented last year at this conference), and a new component called the "Redis broker" that implements the "service-catalog" API specification for the Redis use case.

Real-Time Log Analytics Using Probabilistic Data Structures in Redis

Srinivasan Rangarajan, Mad Street Den

There are different ways to analyze streaming logs and derive metrics and insights from them. Processing high-velocity streaming data accurately in real time is hard in practice, but in some places accuracy can be traded off for memory usage and scalability. In this talk I will explain how we used probabilistic data structures and Redis modules to build a very simple log-processing and metrics dashboard. Probabilistic data structures (PDS) can count or measure very specific metrics, like counting items in a set, checking for membership, etc. Using PDS, we were able to build a very simple dashboard system that tracked a few key metrics using far fewer resources than traditional log-processing systems like the ELK/TICK stack.
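
As a concrete example of trading accuracy for memory, a HyperLogLog can track unique visitors per day in roughly 12 KB regardless of cardinality (a hypothetical sketch in redis-py style; key names are invented):

```python
def track_visitor(r, day, user_id):
    """Add a user to the day's HyperLogLog; duplicates are absorbed."""
    r.pfadd(f"uniques:{day}", user_id)

def unique_visitors(r, day):
    """Approximate distinct count (standard error around 0.81%)."""
    return r.pfcount(f"uniques:{day}")
```

An exact set of millions of user IDs would cost megabytes of memory; the HyperLogLog answers the same "how many distinct?" question at a small, fixed cost.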

Redis Data Structures for Non-Redis People

Alvin Richards, Redis Labs

In this talk, we will cover the basic Redis data structures and how they are applied to solve common use cases: rate limiting, session stores, inventory control, etc. We will take the developer's perspective on how to use these structures, while providing insight into the implications for the Redis server at runtime. This is an ideal talk if you are new to Redis, and especially if you come from a relational, document or other NoSQL database.
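
As one concrete example of the rate-limiting use case, a fixed-window limiter needs only INCR and EXPIRE (a minimal sketch in redis-py style, not code from the talk):

```python
def allow_request(r, client_id, limit=100, window=60):
    """Allow at most `limit` calls per `window` seconds per client.
    INCR is atomic, so concurrent requests are counted correctly."""
    key = f"rate:{client_id}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)  # the first hit of a window starts its clock
    return count <= limit
```

When the key expires, the counter vanishes and the next request opens a fresh window; sliding-window variants use sorted sets instead, at a higher memory cost.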

The Versatility of Redis: Powering Our Critical Business Using Redis

Luciano Sebanca, Eiti Kimura, Movile

Step by step, Redis became one of the key tools for our critical business. The main advantages we found in Redis were its versatility and performance: we use Redis as a cache, a database and a distributed lock. With Redis we now have much more resilient systems with very high throughput. We will talk about how we use Redis to store information on more than 160 million phones with sub-millisecond latency. We will also show how we use Redis to implement locks with load balancing between the clients. Using Redis inside our architecture, we improved the client lead's purchase flow and saved money on media campaigns.
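
The distributed-lock pattern mentioned above can be sketched with SET NX PX plus an atomic Lua release (a hypothetical example in redis-py style, not Movile's actual code):

```python
import uuid

def acquire_lock(r, name, ttl_ms=10000):
    """Try to take the lock; returns a secret token on success, else None.
    The TTL guarantees the lock frees itself if the holder crashes."""
    token = uuid.uuid4().hex
    if r.set(f"lock:{name}", token, nx=True, px=ttl_ms):
        return token
    return None

# Delete the key only if it still holds our token; running this in Lua
# makes the check-and-delete atomic, so we never release another
# client's lock after our own TTL has already expired.
RELEASE_SCRIPT = """
if redis.call('get', KEYS[1]) == ARGV[1] then
    return redis.call('del', KEYS[1])
end
return 0
"""

def release_lock(r, name, token):
    return r.eval(RELEASE_SCRIPT, 1, f"lock:{name}", token) == 1
```

The random token is what makes the release safe: without it, a slow client whose lock expired could delete a lock now held by someone else.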

Serving Predictions with Online Machine Learning Using Redis and Redis Enterprise

Harish Doddi, Datatron Technologies

This session shows how to use Redis's advanced data structures and machine learning support to build a high-performance online machine learning system. Online learning covers techniques for doing machine learning in an online setting, i.e. where you train your model a few examples at a time, rather than on the full dataset (offline learning). I am going to explain how, at Datatron, we internally leverage Redis as both a data source for online learning and a serving layer for the machine learning models we build.

Time-Series Data in Real-time with Redis-Time-Series Module

Danni Moiseyev, Redis Labs

This talk will introduce the Redis time-series module, which allows Redis users to store and query time-series data in Redis. The talk will go over basic usage, inserting and querying data, as well as more advanced usage with data downsampling. We will also explain the memory model behind the data type and how it differs from other Redis data structures.
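
For a sense of the module's shape, the basic insert-and-query flow looks like this (a hedged sketch: the wrapper functions are invented, and the TS.* commands are sent through redis-py's generic execute_command):

```python
def record_sample(r, key, timestamp_ms, value):
    """Append one sample; TS.ADD creates the series if it does not exist."""
    r.execute_command("TS.ADD", key, timestamp_ms, value)

def samples_between(r, key, start_ms, end_ms):
    """Return the raw samples with timestamps in [start_ms, end_ms]."""
    return r.execute_command("TS.RANGE", key, start_ms, end_ms)
```

Downsampling rules (configured per series) would then maintain, say, per-minute averages automatically as raw samples arrive.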

Redis Analytics Use Cases

Leena Joshi, Kyle Davis, Redis Labs

Redis incorporates numerous cool data structures that make analytics simple and high-performance. Join this session to learn how customers use Redis for real-time analytics such as fraud detection, recommendation generation, maintaining distributed counts for metering, estimating element frequencies and reporting. We will also discuss different ways to visualize data in Redis.

Auto-Scaling Redis Caches

Irfan Ahmad, CEO, Cache Physics

Redis caches in modern systems must be manually tuned and sized in response to a changing application workload. A balance must be struck between cost, performance and the revenue lost to cache-sizing mismatches. However, caches are inherently nonlinear systems, making this exercise equivalent to solving a maze in the dark.

Until now!

CachePhysics has created the industry's first auto-scaling solution for caches, leveraging breakthrough predictive algorithms. Imagine your Redis cache self-tuning and right-sizing in real time in response to changing workloads, unlocking huge improvements in cost, efficiency, performance and observability.

In this session, we cover how an auto-scaling Redis cache delivers impressive gains in performance and observability, together with a live demo of a Redis cache scaling with a changing workload:

  • Why do static Redis caches leave so much performance on the floor?
  • What is auto-scaling for a Redis cache, and how does the cache automatically adapt to changing application workloads?
  • The efficiency, QoS, performance SLAs/SLOs and cost tradeoffs that Auto-scaling caches enable
  • Live demo of auto-scaling Redis cache as workload changes

Redis on Google Cloud Platform

Gopal Ashok, Karthi Thyagarajan, Google Cloud

Google Cloud Platform is a global scale service reaching more than a billion users daily. We continue to make GCP easier for developers to build high scale applications and Redis is a critical component of any application that requires extreme performance. Come learn the different ways you can leverage the power of Redis with Google Cloud.

Easy and Repeatable Kubernetes Development with Skaffold and Redis

Matt Rickard, Google

What does the developer workflow look like with Redis in a cloud-native world? We’ll take you step by step with setting up an easy and repeatable developer workflow on Kubernetes with microservices and Redis. This workflow will cover the whole developer lifecycle: local development, CI/CD, and production. We’ll cover tools like skaffold, minikube, and everything else you need as a developer to get up and running with Redis on Kubernetes.

The Benefits of Persistent Memory

Andy Rudoff, Intel Corporation

Persistent memory is an emerging, disruptive technology with the properties of both memory and storage. It allows applications to organize their data into three tiers: storage, memory, and the new persistent memory tier. What does this mean for Redis? This talk will focus on the expected capabilities of persistent memory and the various ways Redis might leverage it. In addition, the current state of persistent memory support in Linux will be summarized.

Open Source Built for Scale: Redis in Amazon ElastiCache Service

Andi Gutmans, Amazon Web Services

Amazon Web Services is an active contributor to the open source community with contributions in machine learning (Apache MXNet), IoT (Amazon FreeRTOS), security (S2N), and databases. We offer a fully managed Redis service that combines the speed, simplicity, and versatility of open-source Redis with manageability, security, and scalability from Amazon.

In this talk, we will talk about open source software at Amazon in areas from Machine Learning to IoT. We'll share our experiences running some of the largest Redis clusters in the world and our contributions to Redis. We will also cover a significant contribution we are making to the open source Redis project and discuss what we built, why it matters, and what value it provides.

Re-Architecting Redis-on-Flash with Intel 3D XPoint™ Memory

Martin Dimitrov, Intel

Redis on Flash (RoF) accommodates large data-sets at a lower cost per GB, without sacrificing application performance in most use-cases, by combining the capacity of DRAM and flash SSDs. RoF makes this possible through the intelligent placement and migration of hot and cold data between DRAM and flash. In particular, certain data-structures and performance sensitive data are always resident in DRAM, while other data are allowed to migrate from DRAM to flash.

Intel 3D XPoint™ Memory is a revolutionary new technology that dramatically increases memory capacity while maintaining low access latencies. Redis Labs and Intel are collaborating to build the next version of RoF with Intel 3D XPoint Memory.

In this session, the Intel and Redis Labs engineering teams will share their collaborative journey of bringing RoF to 3D XPoint™ Memory. They will introduce 3D XPoint and cover some of its usage models in RoF. They'll also talk about the design considerations and challenges they overcame for RoF to take full advantage of the new memory technology.

Deduplicating Data Streams with Bloom Filters

Cristian Castiblanco, Scopely

A success story on how we used Bloom filters (the ReBloom module) to replace our idempotence stores, which reduced our Redis memory usage by more than 4x. We also used them to add an event-level deduplication layer to our real-time data streams, which process more than 2 billion events per day.
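
The deduplication trick rests on a property of BF.ADD: it returns 1 only when the item was not already (probably) present. A minimal sketch (the wrapper function is hypothetical; the BF.ADD command is sent via redis-py's generic execute_command):

```python
def is_duplicate(r, filter_key, event_id):
    """True if this event was (very likely) seen before. Bloom filters
    allow false positives, so a brand-new event may rarely be mistaken
    for a duplicate, but a true duplicate is always caught: there are
    no false negatives."""
    return r.execute_command("BF.ADD", filter_key, event_id) == 0
```

A single check-and-record round trip per event, against a filter that is a small fraction of the size of an exact set, is what makes the 4x memory saving possible.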

Redis Unique

Eran Sandler, AppSharp

Unique IDs are required in any database system, and when these systems grow large, generating a unique ID becomes a bottleneck. RedisUnique is a Redis module supporting multiple unique-ID generation algorithms; it can be used as part of an existing Redis deployment or a dedicated one. Interfacing with it is as simple as sending a Redis command, and it can be combined with Redis as a primary datastore to generate unique IDs for your data. We will go over some unique-ID formats, what they are good for, and how to use RedisUnique in various ways.
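
Even without the module, the simplest Redis unique-ID scheme is an atomic counter; a minimal sketch (not RedisUnique's implementation; names are invented) in redis-py style:

```python
def next_id(r, namespace):
    """INCR is atomic, so concurrent callers can never receive the same
    value; with persistence enabled the counter also survives restarts."""
    n = r.incr(f"id:{namespace}")
    return f"{namespace}:{n}"
```

Richer formats (the kind a dedicated module can generate) fold in timestamps or shard identifiers so IDs remain unique across many Redis instances, not just one counter.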

Ultra Scaling with Redis Enterprise

David Maier, Anna Iskra, Redis Labs

This session will introduce several best practices for scaling Redis with Redis Enterprise. We will also demonstrate how we achieved our benchmark of 30M ops/sec with just 18 AWS EC2 instances.

At Redis Labs, we manage thousands of databases, of different sizes, throughput and use cases. We will talk about real-life scaling challenges our customers face every day, and how Redis Labs helps to overcome them.

The Redis Cluster API allows us to scale linearly, virtually without limit, by simply adding shards and nodes. It relies on the intelligence of the clients, which decide which endpoint to send each request to based on the key part of an item and a hashing function shared across the clients and the cluster.
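
That shared hashing function is CRC16 modulo 16384. A client-side sketch in Python (binascii.crc_hqx implements the CRC16-CCITT/XModem variant that Redis Cluster uses):

```python
import binascii

HASH_SLOTS = 16384

def key_slot(key: str) -> int:
    """Compute the Redis Cluster hash slot for a key. If the key contains
    a non-empty {hash tag}, only the tag is hashed, so related keys such
    as user:{42}:profile and user:{42}:cart land on the same slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:  # tag must be non-empty
            key = key[start + 1:end]
    return binascii.crc_hqx(key.encode(), 0) % HASH_SLOTS
```

Hash tags are how applications keep multi-key operations possible under clustering: keys sharing a tag are guaranteed to live on the same shard.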

This talk explains how the Redis Cluster API works and how you can leverage it for ultra-scalability to deliver many millions of operations per second.