How Adobe Leverages Redis to Serve/Secure Billions of API Requests

Ajay Kemparaj, Sr Computer Scientist, Adobe Systems

At the Adobe API Platform, Redis is one of the core critical components: any API served by Adobe I/O has Redis involved somewhere. This session covers use cases of Redis in the API gateway: how we use different Redis data structures to store API publisher metadata, store JWT tokens, expire tokens, implement throttling and rate limiting, restrict certain users in dev mode as opposed to production mode, and run HyperLogLog-based analytics. Along the way we cover Redis data structures such as HyperLogLog, sets, hashes, and lists.
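
A rough sketch of the kinds of patterns the abstract mentions (token expiry, fixed-window rate limiting, and HyperLogLog counters), shown here with the Python redis-py client; the key names and limits are illustrative assumptions, not Adobe's actual implementation.

```python
import redis

r = redis.Redis(decode_responses=True)

# Store a JWT with a TTL so it expires automatically (token expiry).
def store_token(client_id, jwt, ttl_seconds=3600):
    r.setex(f"token:{client_id}", ttl_seconds, jwt)

# Fixed-window rate limiting: allow `limit` calls per `window` seconds.
def allow_request(api_key, limit=100, window=60):
    key = f"rate:{api_key}"
    count = r.incr(key)
    if count == 1:
        r.expire(key, window)          # start the window on the first hit
    return count <= limit

# HyperLogLog for analytics: approximate unique callers per API per day.
def record_caller(api_name, client_id, day):
    r.pfadd(f"uniq:{api_name}:{day}", client_id)

def unique_callers(api_name, day):
    return r.pfcount(f"uniq:{api_name}:{day}")
```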

Microservices and Redis: A Match made in Heaven

Vijay Vangapandu, Software Architect, eHarmony

eHarmony is an online relationship services provider that, since its inception, has made 20+ billion total matches, currently delivers 15+ million matches per day globally, and handles 1+ million user communications each day. In the competitive dating industry, our product strives to push more features to users more rapidly and needs to adopt new processes and technologies to stay on the cutting edge while keeping the existing application stable. In this session, we will discuss the business and product benefits of a microservices architecture and how Redis underpins the data services needed in this architecture. With microservices, our developers gain more independence and can scale smaller, loosely coupled components more easily. Our previous data store was operationally too cumbersome for this new lightweight architecture. We needed a NoSQL store that was easy to configure and maintain, consistent in its high availability, suitable for different use cases, and simple but robust. Redis Enterprise's low-touch operations and 24/7 support carry our infrastructure, and we expect it to match our growing needs and expectations. Redis and Redis Enterprise supercharge the following use cases in our microservices ecosystem: authorization store, badge store, speed store in a lambda architecture, sorted sets for security, configuration services, hibernating our second-level cache, a lightweight broker to publish messages, rate control of API usage by IP/UA, and user-service migration. This session will discuss how we use Redis data structures like strings, sets, hashes, lists, sorted sets and HyperLogLog to deliver our microservices.
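
As one illustration of the "rate control of API usage by IP/UA" and "sorted sets for security" items above, here is a hedged sketch of a sliding-window limiter built on a sorted set with redis-py; the key names and thresholds are hypothetical, not eHarmony's configuration.

```python
import time
import uuid
import redis

r = redis.Redis(decode_responses=True)

def within_rate_limit(ip, user_agent, limit=60, window_seconds=60):
    """Sliding-window rate check keyed by IP + User-Agent."""
    key = f"rl:{ip}:{user_agent}"
    now = time.time()
    pipe = r.pipeline()
    pipe.zremrangebyscore(key, 0, now - window_seconds)  # drop hits outside the window
    pipe.zadd(key, {f"{now}:{uuid.uuid4()}": now})       # record this hit
    pipe.zcard(key)                                      # hits remaining in window
    pipe.expire(key, window_seconds)
    _, _, hits, _ = pipe.execute()
    return hits <= limit
```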

Redis at Lyft: 1,000 Instances and Beyond

Daniel Hochman, Lyft

Revisit Lyft's Redis infrastructure at a higher level following last year's focus on geospatial indexing. This year's talk is broadly applicable to optimizing any type of application powered by Redis. This session will take a broader look at Lyft's deployment of Redis and deep dive into what we've learned recently when writing our open-source Envoy Redis proxy. The talk will cover: deployment, configuration, partitioning, consistent hashing, data model optimization, and client abstractions.
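
The partitioning and consistent-hashing topics can be illustrated with a minimal hash ring in Python; this is a generic sketch of the technique, not Lyft's or Envoy's implementation.

```python
import bisect
import hashlib

class HashRing:
    """Minimal consistent-hash ring mapping keys to Redis shards."""

    def __init__(self, nodes, vnodes=100):
        self.ring = []                       # sorted list of (hash, node)
        for node in nodes:
            for i in range(vnodes):          # virtual nodes smooth the distribution
                h = self._hash(f"{node}#{i}")
                self.ring.append((h, node))
        self.ring.sort()
        self.hashes = [h for h, _ in self.ring]

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def node_for(self, key):
        h = self._hash(key)
        idx = bisect.bisect(self.hashes, h) % len(self.ring)
        return self.ring[idx][1]

ring = HashRing(["redis-a:6379", "redis-b:6379", "redis-c:6379"])
print(ring.node_for("user:12345"))   # the same key always maps to the same shard
```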

Redis Analytics Use Cases

Leena Joshi, Redis Labs

Redis incorporates numerous cool data structures that make analytics simple and high performance. Join this session to learn how customers use Redis for real-time analytics such as fraud detection, recommendation generation, maintaining distributed counts for metering, estimating element frequencies, and reporting. We will also discuss different ways to visualize data in Redis.
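
To make the metering and frequency-estimation use cases concrete, here is a hedged redis-py sketch using time-bucketed counters and a sorted set as a frequency table; the key names are illustrative only.

```python
import time
import redis

r = redis.Redis(decode_responses=True)

# Distributed metering: one counter per customer per hour.
def record_call(customer_id):
    hour = time.strftime("%Y%m%d%H")
    r.incr(f"meter:{customer_id}:{hour}")

# Frequency estimation / top-N reporting with a sorted set.
def record_item(item_id):
    r.zincrby("item:frequencies", 1, item_id)

def top_items(n=10):
    return r.zrevrange("item:frequencies", 0, n - 1, withscores=True)
```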

Redis is Dead. Long live Redis!

Ryan Luecke, Box, Inc.

Find out how Box migrated from Redis to Redis Cluster to power Recent Files, Webapp Sessions, and more. Discover the obstacles we faced when deploying hundreds of Redis instances inside Redis Clusters, as well as the real-world benefits we enjoy now that we have migrated. Learn about the automation we've built to help keep our Redis Clusters running and maintained.

Remote Monitoring and Control of a Scientific Instrument

Hardeep Sangha, ThermoFisher Scientific

The remote monitoring application allows users to monitor run data, send remote commands to alter the state of a run, and view analyzed sample files at the end of the run. The gist of this talk is the architectural details of this application with Redis at its core. We will also discuss the other data storage options that were evaluated before settling on Redis. The application design, briefly, is as follows. Data ingestion is architected to be serverless using AWS Lambda: multiple streams of data are funneled from the IoT gateway (with IoT Rules) into Kinesis and then into Lambda functions, from where they are loaded into Redis. The serverless architecture gives us the scale to respond to increased concurrency without having to provision capacity up front. Because of the inherent cold-start behavior of Lambda, however, we opted for pre-provisioned compute capacity (an EC2 cluster with auto-scaling, running across multiple zones) for data visualization, with data sourced directly from the Redis cluster, which keeps the visualizations highly performant. Spring Data libraries form the data access layer. We make extensive use of sorted sets so that stream data is automatically sorted upon ingestion. Redis Pub/Sub and notifications are used to ensure data integrity and to clean up temporary artifacts. The cluster itself is multi-node with read replicas, running across multiple AWS zones for high availability and automatic failover.
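
A hedged sketch of the sorted-set ingestion pattern described above: readings are scored by timestamp so range queries return them already in time order. The key layout and the handler shape are assumptions for illustration, not the actual application code.

```python
import json
import time
import redis

r = redis.Redis(decode_responses=True)

def ingest_reading(instrument_id, reading):
    """Called from the ingestion path (e.g. a Lambda handler) for each sample."""
    ts = reading.get("timestamp", time.time())
    r.zadd(f"run:{instrument_id}:samples", {json.dumps(reading): ts})

def readings_between(instrument_id, start_ts, end_ts):
    """Visualization layer: pull a time slice, already sorted by timestamp."""
    raw = r.zrangebyscore(f"run:{instrument_id}:samples", start_ts, end_ts)
    return [json.loads(item) for item in raw]
```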

Writing Modular & Encapsulated Redis Client Code

Jim Nelson, Internet Archive

Programming Redis client code often means breaking encapsulation in order to maximize Redis efficiency, reduce network round-trips, and/or take advantage of transactions. This leads to classic software engineering problems: difficult-to-maintain code, contract violations, tightly-locked dependencies... balls of mud that are fragile and sensitive to change. This talk discusses strategies for developing modular and contained Redis code via concepts such as promises/futures and event notifications. Presented code will be in PHP using the Predis client, but all the concepts presented here should be portable to other languages and libraries.
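
The presented code will be PHP with Predis, but the promise/future idea translates to other clients; the following is a rough Python sketch in which independent modules queue commands against a shared pipeline and receive future-like handles resolved on a single round trip. All names here are invented for illustration.

```python
import redis

class DeferredResult:
    """Placeholder that is filled in once the shared pipeline executes."""

    def __init__(self):
        self._value = None
        self._resolved = False

    def resolve(self, value):
        self._value, self._resolved = value, True

    def get(self):
        if not self._resolved:
            raise RuntimeError("pipeline not executed yet")
        return self._value

class BatchedClient:
    """Lets independent modules queue commands without owning the round trip."""

    def __init__(self, conn):
        self._pipe = conn.pipeline()
        self._deferred = []

    def queue(self, command, *args):
        getattr(self._pipe, command)(*args)   # queue the command on the pipeline
        d = DeferredResult()
        self._deferred.append(d)
        return d

    def flush(self):
        for d, value in zip(self._deferred, self._pipe.execute()):
            d.resolve(value)

client = BatchedClient(redis.Redis(decode_responses=True))
views = client.queue("get", "page:views")     # queued by module A
tags = client.queue("smembers", "page:tags")  # queued by module B
client.flush()                                # one network round trip for both
print(views.get(), tags.get())
```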

Increase Application Performance with SQL Auto-Caching; No Code Changes

Roland Lee, Heimdall Data
Erik Brandsberg, CTO, Heimdall Data

Configuring caches is complex. Engineering teams spend a lot of resources building and maintaining cache subsystems, and the manual intervention required (i.e., deciding what to cache and invalidate) is risky. Now there is a solution that provides granular SQL analytics while safely auto-caching into Redis Labs. Heimdall Data intelligently caches, invalidates, and synchronizes across Redis Labs nodes; no more misconfigurations or stale data, and zero changes are required to your existing application or database. In this session: learn how easy it is to speed up your application in five minutes with Heimdall for Redis Labs; see how Heimdall analytics helps you identify SQL bottlenecks; and watch a demo of Heimdall with Redis Labs in action.

Amazing User Experiences with Redis & RediSearch

Stefano Fratini, Siteminder
Leo Wei, Tech Lead / Solution Architect, Siteminder

What makes a user experience amazing? The way a user interacts with the system (the interface) and the way the system responds (response time). Redis is fast on its own, and RediSearch makes it a great, easy tool to power autocomplete and full-text search and give your users the experience they deserve, from back to front.
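
For readers unfamiliar with RediSearch's suggestion API, a minimal autocomplete sketch might look like the following, sent as raw commands from redis-py against a server with the RediSearch module loaded; the dictionary key, entries, and scores are made up.

```python
import redis

r = redis.Redis(decode_responses=True)

# Populate a suggestion dictionary (FT.SUGADD key string score).
for hotel, score in [("Sunrise Beach Resort", 5), ("Sunset Bay Hotel", 3)]:
    r.execute_command("FT.SUGADD", "autocomplete:hotels", hotel, score)

# Prefix query with fuzzy matching (FT.SUGGET key prefix FUZZY MAX n).
suggestions = r.execute_command(
    "FT.SUGGET", "autocomplete:hotels", "sun", "FUZZY", "MAX", "5"
)
print(suggestions)
```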

Redis Memory Optimization

Sripathi Krishnan, HashedIn

This is a practical guide to optimizing the memory usage of your Redis instance. The talk will introduce the internal data structures used by Redis, such as quicklists, intsets, skiplists, and hash tables. It will then show how the choice of data structure affects memory usage, and how you as a developer can influence which data structure Redis picks. The talk will also cover common mistakes developers make that cause bloat in Redis memory consumption. Finally, it will walk through how you can use redis-rdb-tools to diagnose and fix memory problems.
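
One way to see the encoding choices the talk refers to is to inspect OBJECT ENCODING while crossing the configured thresholds; below is a small, hedged experiment with redis-py (the printed encodings and the threshold values depend on your Redis version and redis.conf).

```python
import redis

r = redis.Redis(decode_responses=True)
r.delete("h")

# Small hashes are stored in the compact ziplist/listpack encoding...
for i in range(10):
    r.hset("h", f"field{i}", "x")
print(r.object("encoding", "h"))      # e.g. 'ziplist' or 'listpack'

# ...until they cross hash-max-ziplist-entries / hash-max-ziplist-value,
# after which Redis converts them to a real hashtable (more memory per entry).
for i in range(10, 2000):
    r.hset("h", f"field{i}", "x")
print(r.object("encoding", "h"))      # 'hashtable'

# Developers can influence this by tuning the thresholds (within reason).
print(r.config_get("hash-max-ziplist-entries"))
```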

Redis at the Heart of Large Data Pipelines

Piyush Verma, Oogway

Imagine distributed systems and data. Loads of it. This talk explores common scenarios in building large-scale data pipelines and how Redis helps address them. Some scenarios we will cover include ordered delivery, dead-letter queues, distributed wait groups, de-duplication across moving windows, idempotence, discrete and rolling rate limits, throttling across shards, and batching a stream with time or spatial thresholds, among others. The talk is the tale of one of our solutions, deployed with a state public transportation department in India, describing the problems encountered, why each solution was picked, and how Redis forms an essential part of those solutions.
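
To ground one of these scenarios: de-duplication and idempotence across a moving window can be reduced to an atomic SET NX EX per message ID. This is a generic sketch with invented key names, not the deployed solution.

```python
import redis

r = redis.Redis(decode_responses=True)

def seen_before(message_id, window_seconds=3600):
    # SET key value NX EX <ttl> is atomic, so concurrent consumers across
    # shards agree on exactly one "first" delivery inside the window.
    first_time = r.set(f"dedup:{message_id}", 1, nx=True, ex=window_seconds)
    return not first_time

def process(payload):
    print("processing", payload)       # placeholder for real business logic

def handle(message_id, payload):
    if seen_before(message_id):
        return                         # duplicate within the moving window: drop it
    process(payload)

handle("msg-123", {"stop_id": 42})
handle("msg-123", {"stop_id": 42})     # second delivery is ignored
```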

Brief Data and Ephemeral UIs

Sandro Pasquali, Bulldog and Fisk

Snapchat's great discovery was the power of throwaway data. We are now allowed to destroy data. This has overthrown years of thinking that digital constructions must be preserved, like every mark on every piece of paper is. On this view UIs (a media object; a landing page) are cheap, requiring low storage costs and mere construction cost in terms of time, data access, and data transmission. Redis is fast, and is well appointed with eviction, expiration, and compaction tools. Managed volatility is also the secret weapon of Redis. And now we have Streams. Time and index ordered ranges in Redis Streams can be pulled as needed, quickly, requiring no structural persistence -- a request is always a Stream range and when received a UI bounded by that range is spun up. We can simply build UIs on the fly, dependent solely on data events.

Web UIs are no longer optimistic ghosts, these abstract templates of supposedly real things stored as blueprints, indifferent to whether the information they need to reanimate still exists or has changed in meaning, a scary trap for the user and a sink for their frustration with data synchronization errors and general errors of absence. In this talk we will look at how bounded ranges in Streams can be used to construct a sort of materialized view on data, bound as the store feeding a React UI that is constructed just in time and disposed of once the targeted user views it. We'll deploy "IoT" React components powered by Streams across a serverless "web application" where neither the data nor the rendered view demands persistence, helping with scale and distributed architectures.
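
A minimal sketch of the bounded-range idea: events are appended with XADD and a view is hydrated from just the XRANGE slice it is bound to. The stream and field names are invented for illustration.

```python
import redis

r = redis.Redis(decode_responses=True)

# Producers append UI-relevant events; MAXLEN keeps the stream ephemeral.
def record_event(device_id, payload):
    r.xadd(f"events:{device_id}", payload, maxlen=10_000, approximate=True)

# A "just-in-time" view is simply the slice of the stream it is bound to.
def view_data(device_id, start_id="-", end_id="+", count=100):
    return r.xrange(f"events:{device_id}", min=start_id, max=end_id, count=count)

record_event("sensor-1", {"temp": "21.4", "unit": "C"})
print(view_data("sensor-1"))
```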

Tailoring Redis Modules for Your User Needs

Adam Lev-Libfeld, Tamar Technological Solutions

How and when should you approach designing a Redis module for your system? Even though Redis modules have been available for over a year and there is plenty of interest in the possibilities this meta-feature enables, there has been little to no adoption of modules written by Redis users to satisfy unique needs. I believe this is mostly due to the difficulty of discerning when, where, and how these modules can be beneficial, and of assessing the cost of developing them. In this talk, I will give the audience the tools to understand the conditions under which migrating existing code to a Redis module can (or can't) improve performance and user experience while helping contain costs, and how to make use of them. The talk revolves around two essential concepts: 1. three common use cases (all taken from real-life scenarios); 2. methods and techniques for quickly migrating an existing codebase into module form.

Redis on Flash: A Game-Changing Database Platform for Etermax

Gonzalo Garcia, Etermax
Esteban Masoero, Technical Owner of the Platform team, Etermax

Etermax, a mobile gaming company and the creator of such popular hits as Pictionary and Trivia Crack, is the leading company in Latin America for social games. We've emerged as an industry model in cross-platform game development for the region. With a total of 300 million installs, 20 million daily active users and a peak usage of up to 25 million users for our most popular games, our company has grown exponentially and greatly benefited from Redis Enterprise's high scalability and storage capabilities.

Join us in this session to learn how we use Redis Enterprise to manage user sessions, device management, password authentication, Pub/Sub, query caching and much more. We will discuss how we use Redis Enterprise to store user- and game-related information in hashes. We will illustrate how we use Redis for real-time analytics to store user preferences, likes/dislikes, attempts on questions and correct answers, graphing question difficulty and identifying the order in which to serve questions. We also use Redis to search for old user names, to generate notifications that games are about to expire, for small game-feature systems, and more.

Additionally, we will share details on the several advantages of using Redis on Flash. With Redis on Flash, we store old and inactive user data on Flash SSDs and our newest/relevant active users on RAM. With the proven Redis speed and the cost-effectiveness of Flash SSDs, we now guarantee our users sub-millisecond response times and high throughput, while cutting infrastructure spending by over 70%.

We rely on Redis for many use cases; compared to SQL-based databases, Redis on Flash was able to handle the ops/sec we required with the least amount of infrastructure, cutting our costs and resources tremendously.

Fail-Safe Starvation-Free Durable Priority Queues in Redis

Jesse Willett, ProsperWorks, Inc.

ProsperWorks CRM is a cloud-based CRM that integrates with Gmail, Google Drive and the rest of the Google Apps suite. We have a growing fleet of asynchronous job types. Many of these touch systems that are slow, are close to their scale limits, have sensitive rate limits, or exhibit frequent transient failures. Moreover, our compute containers are terminated at arbitrary times. In most cases we don't know these limits in advance, or the limits change over time. We address these problems with a variety of strategies, which will be covered. At the heart of these techniques are "Icks": priority queues with write-folding and two-phase commit implemented in Redis/Lua. Icks are not unlike the BRPOPLPUSH pattern, but they offer additional non-starvation guarantees and operational flexibility.
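
The abstract compares Icks to the BRPOPLPUSH pattern; for readers who haven't seen that baseline, a bare-bones version looks like the sketch below (queue names are invented, and the Ick-specific write-folding and two-phase commit are not shown).

```python
import redis

r = redis.Redis(decode_responses=True)
QUEUE, PROCESSING = "jobs:pending", "jobs:processing"

def handle(job):
    print("working on", job)            # placeholder for real work

def worker_loop():
    while True:
        # Atomically move a job to a processing list so it is not lost
        # if this container is terminated mid-flight.
        job = r.brpoplpush(QUEUE, PROCESSING, timeout=5)
        if job is None:
            continue
        try:
            handle(job)
            r.lrem(PROCESSING, 1, job)  # acknowledge: remove from processing
        except Exception:
            pass  # leave the job on PROCESSING for a reaper to re-queue later
```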

Integrating Redis with ElasticSearch to Get the Best Out of Both Technologies

Dmitry Polyakovsky, Zumobi
  • Using new Redis Streams and RediSearch to work with time series data
  • Moving data between Redis and ElasticSearch (see the sketch after this list)
  • Building rich dashboards to visualize data and doing ad hoc analysis
  • Sharing practical solutions based on personal experience and real world problems
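
As a rough illustration of the second bullet, one simple pattern is to drain a Redis list and index each record into Elasticsearch over its REST API. Everything here (list name, index name, endpoint URL) is hypothetical, not the speaker's setup.

```python
import json
import requests
import redis

r = redis.Redis(decode_responses=True)
ES_URL = "http://localhost:9200"          # assumed local Elasticsearch endpoint

def drain_to_elasticsearch(list_key="metrics:outbox", index="metrics"):
    """Pop queued JSON documents from Redis and index them for ad hoc analysis."""
    while True:
        raw = r.rpop(list_key)
        if raw is None:
            break                          # outbox is empty
        doc = json.loads(raw)
        resp = requests.post(f"{ES_URL}/{index}/_doc", json=doc, timeout=5)
        resp.raise_for_status()

# Producer side: r.lpush("metrics:outbox", json.dumps({"ts": 1690000000, "value": 42}))
```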

Application of Redis in IOT Edge Devices

Glenn Edgar, Lacima Ranch

LaCima Ranch is an avocado ranch in Riverside, California. The ranch has a limited amount of manual resources, so we spent the last four years developing automation systems, which we made open source. About three years ago we discovered Redis, somewhat by accident, and incorporated it into our control system. Initially we wanted to replace SQLite because we needed multiprocess operation. The target device was a 32-bit ARM processor with 1 GB of RAM, and Redis operated satisfactorily in this environment. Over the years, our use of Redis evolved into the following functions: 1. main primary store; 2. event bus; 3. time-series database; 4. graphical data; 5. search engine for log files. As the control system was refactored, what emerged was a core Redis-based system which serves as an "apartment complex" for SCADA and control-type applications. We found that most of the functions an online business uses to monitor its data centers can be replicated on IoT edge devices, and just as for online businesses, the Redis-based "apartment complex" reduces the work of deploying IoT applications. The objective of this presentation is two-fold: 1. to spur the development of Redis-based applications on IoT edge devices; 2. to spur discussion of how to efficiently interact between the Redis database on the IoT device and the cloud, an area where we would welcome input. In short, this talk aims to promote the development of Redis at the IoT edge.

Video Experience Operational Insights in Real Time

Aditya Vaidya, Oath Inc

The Video Lifecycle Platform of Verizon Digital Media Services (VDMS) offers a solution for content providers to prepare, deliver and monetize live, linear or on-demand video content with best-in-class performance worldwide. Streaming flawless video content to large audiences is challenging, especially when the event being streamed is a global live event like an NFL game. Anything can go wrong during the event, and viewers will not tolerate disruptions. You have only one chance to get it right, so it is absolutely critical to have real-time operational insight into the quality of the video stream, shortening the resolution time for any streaming issues and thereby minimizing disruption. VDMS's Real Time Operational Insights Platform gathers data on quality of video experience as well as audience engagement in real time from video players.

Gathering quality-of-experience metrics such as the buffering performance of different CDNs (Content Delivery Networks) in our multi-CDN delivery ecosystem gives us the ability to redistribute traffic to another CDN in real time if the performance of one CDN causes increased buffering. Gathering audience engagement metrics, such as concurrent video sessions on partner properties like Yahoo Sports and AOL, lets our customers estimate in real time the scale of audience engagement during a game. This session describes how Redis powers VDMS's Real Time Operational Insights Platform, which gathers video playback statistics for millions of concurrent viewers and aggregates and delivers relevant metrics across multiple dimensions with an end-to-end latency of 20 seconds. We will also see how Redis, with its probabilistic HyperLogLog data structure, helped us calculate unique concurrent viewers in real time over millions of video sessions using very limited memory.
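
The HyperLogLog piece can be sketched in a few lines: each playback heartbeat registers the session into a per-interval HLL, and unique concurrents are read with PFCOUNT. The key layout below is an assumption for illustration, not VDMS's schema.

```python
import redis

r = redis.Redis(decode_responses=True)

def record_heartbeat(video_id, session_id, interval_bucket):
    # A HyperLogLog key stays ~12 KB no matter how many millions of
    # session IDs are added to it.
    key = f"concurrents:{video_id}:{interval_bucket}"
    r.pfadd(key, session_id)
    r.expire(key, 3600)

def unique_concurrents(video_id, interval_bucket):
    # Approximate distinct count with roughly 0.81% standard error.
    return r.pfcount(f"concurrents:{video_id}:{interval_bucket}")
```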

Redis at LINE, 25 Billion Messages Per Day

Jongyeol Choi, LINE+ Corporation

LINE Messenger is popular in Japan, Taiwan, Thailand, Indonesia and many other countries, with more than 160 million monthly active users around the world. We typically deliver 25 billion messages per day and depend heavily on Redis, HBase, and Kafka to provide a fast and reliable messaging system. For messaging alone, we use more than 60 Redis clusters and more than 10,000 Redis server processes on over 1,000 high-performance physical machines. Many of these clusters serve not only as caches but as primary storage. This presentation will focus on how we use Redis alongside other storage systems for messaging and how we develop and operate internal Redis-related facilities to handle high traffic. I will talk about how we created a client-side sharding system without a proxy, as well as an internal monitoring system, and how we adopted an asynchronous Redis client in the production system. I will also talk about our recent experience with the official Redis Cluster. The main goal of this talk is to show how we develop and operate Redis facilities for a messaging system.

Managing 300 Billion API Calls / Month with Redis

Iddo Gino, RapidAPI

RapidAPI serves over 300 billion API calls per month, peaking at over 1 million calls per second. For each call, RapidAPI performs several steps, including authenticating the API key and API subscription, and logging the request for analytics and billing. Being a proxy to APIs provided by Fortune 500 companies, RapidAPI has to keep the latency of those operations to single-digit milliseconds. RapidAPI uses Redis (on Redis Labs) as an analytics database, streaming in writes to log API requests (including information like latency, quotas used and response codes). The data is later used to present real-time API usage analytics and to generate invoices for API consumption. Our billing engine uses subscription information from our main (SQL) database, combined with Redis analytics, to generate invoices. Generating a single invoice usually involves touching 10,000-100,000 Redis keys, so we use a special query language to query the SQL and Redis databases together and optimize those reads. During the talk, we'll present: the benchmark we ran with multiple time-series databases (streaming 10k-20k writes per second) to explain why we chose Redis; the schema we use to store time-series data aggregations in Redis, optimized for fast writes; how our real-time dashboard pulls changes from Redis; and our open-source billing engine, which uses a special query language to 'JOIN' data between our SQL and Redis databases.
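
The actual schema is covered in the talk; a generic write-optimized shape for this kind of aggregation is one hash per API per minute, updated with HINCRBY inside a pipeline. The field and key names below are invented for illustration.

```python
import time
import redis

r = redis.Redis(decode_responses=True)

def log_request(api_id, latency_ms, status_code):
    minute = int(time.time() // 60)
    key = f"stats:{api_id}:{minute}"
    pipe = r.pipeline()
    pipe.hincrby(key, "count", 1)
    pipe.hincrby(key, "latency_ms_total", latency_ms)
    if status_code >= 400:
        pipe.hincrby(key, "errors", 1)
    pipe.expire(key, 60 * 60 * 24 * 60)     # keep ~60 days for billing runs
    pipe.execute()

def minute_stats(api_id, minute):
    return r.hgetall(f"stats:{api_id}:{minute}")
```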

Migrating from Coherence to Redis

Vivek Rajput, RCI
Balaji Mariyappan, Solutions Engineer, RCI

The RCI web application relies heavily on caching for its services. Currently we use Oracle Coherence for caching. We have decided to migrate to an open-source technology to save cost, reduce complexity and improve performance. In our research, the Redis platform topped other technologies in meeting our requirements. This session will explain how we architected our new system and which KPIs we measured during the migration process.

From Twitter to Redis Streaming Data

Jean Winget, Wgdesign

Mapping disaster relief scenarios requires accurate information from multiple sources. Twitter feeds combined with Redis streaming data are a new approach to delivering data to the mapping site and to first responders.

The Versatility of Redis: Powering Our Critical Business Using Redis

Luciano Sabenca, Movile
Eiti Kimura, Architect, Movile

Step by step, Redis became one of the key tools for our critical business. The main advantages we found in Redis are its versatility and performance: we use Redis as a cache, as a database and as a distributed lock. With Redis we now have much more resilient systems with very high throughput. We will talk about how we use Redis to store information on more than 160 million phones with sub-millisecond latency. We will also show how we use Redis to implement locking with load balancing between clients. Using Redis inside our architecture, we enhanced the client lead purchase flow and saved money on media campaigns.
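
A common form of the distributed lock mentioned above is SET NX PX with a token that only the owner may release; this is a generic sketch (not Movile's code), and production deployments should weigh the usual caveats around lock expiry.

```python
import uuid
import redis

r = redis.Redis(decode_responses=True)

RELEASE_LUA = """
if redis.call('get', KEYS[1]) == ARGV[1] then
  return redis.call('del', KEYS[1])
end
return 0
"""
release_script = r.register_script(RELEASE_LUA)

def acquire_lock(resource, ttl_ms=5000):
    token = str(uuid.uuid4())
    if r.set(f"lock:{resource}", token, nx=True, px=ttl_ms):
        return token                      # we own the lock until it expires
    return None

def release_lock(resource, token):
    # Only delete the key if we still own it (compare-and-delete in Lua).
    return release_script(keys=[f"lock:{resource}"], args=[token])
```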

A Recommendation Engine For The Faint Hearted: Using Redis Sorted Sets And Set Theory

Kinane Domloje, Vinelab

A first dive into recommendation engines is a daunting challenge. Where to start? Which tools to use? Are real-time recommendations possible? Which recommendation algorithm to follow? How fast can recommendations be served? These are some of the questions that pop to mind, and answering them requires research and a deep understanding of the underlying mechanisms. Considering how approachable Redis is, using it as a recommendation engine renders the technology accessible to all. In this talk we showcase how a recommendation engine can easily be implemented using Redis, sorted sets and set theory, taking advantage of in-memory computation, a single data type and a basic understanding of math. Our approach is agnostic to whichever recommendation algorithm the business decides on, be it content-based or collaborative filtering; with the right data model at hand, it is replicable. With a content-based recommendation engine in mind, chosen for its simplicity and its adequacy in conveying our message, we will focus on: 1. the design of a data model that fits the purpose; 2. the application of set-theory principles; 3. the use of intermediary, temporary sorted sets to chain operations; 4. the use of dynamic parameters to tweak recommendations.
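
A condensed, hypothetical version of points 1-4: per-attribute sorted sets are intersected into a temporary set, the weights act as the "dynamic parameters", and the temporary key is expired once read. The data and key names are invented, not the speakers' model.

```python
import redis

r = redis.Redis(decode_responses=True)

# 1. Data model: one sorted set per content attribute, scored by relevance.
r.zadd("tag:technology", {"article:1": 0.9, "article:2": 0.4, "article:3": 0.7})
r.zadd("tag:startups",   {"article:2": 0.8, "article:3": 0.6, "article:4": 0.9})

def recommend(user_id, liked_tags, weights, limit=5):
    # 2 & 3. Set theory: weighted intersection into an intermediary key.
    tmp_key = f"reco:tmp:{user_id}"
    r.zinterstore(tmp_key, {f"tag:{t}": w for t, w in zip(liked_tags, weights)})
    r.expire(tmp_key, 60)                 # temporary, self-cleaning result
    # 4. Tweak the ranking simply by changing the weights passed in.
    return r.zrevrange(tmp_key, 0, limit - 1, withscores=True)

print(recommend("u42", ["technology", "startups"], [1.0, 2.0]))
```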

Ad Serving Platform Using Redis

Sankalp Jonna, GreedyGame

User-based targeting is the core feature of an ad network. A modern ad network needs complex targeting with advanced inclusion and exclusion rule sets based on attributes like location, game, OS and device. For high-performance targeting, the architecture should be able to perform quick set operations, which calls for a fast database that supports core set operations natively. Redis, the popular open-source in-memory database, is known for its in-memory set operation capability. It helps us serve over 200 million ad requests monthly with modest compute: just three 2-core, 8 GB RAM machines. It takes about 10 milliseconds to serve each ad request and only 500 microseconds for an individual set operation in Redis. This session outlines the algorithm and code necessary to implement campaign targeting based on core targeting parameters.
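
The inclusion/exclusion rules map naturally onto SINTER and set difference; below is a toy version of the targeting step, with attribute sets and campaign names invented for illustration.

```python
import redis

r = redis.Redis(decode_responses=True)

# Index campaigns under each attribute value they target.
r.sadd("campaigns:location:IN", "c1", "c2", "c3")
r.sadd("campaigns:os:android",  "c1", "c3", "c4")
r.sadd("campaigns:excluded:game:g42", "c3")       # exclusion rule set

def eligible_campaigns(location, os, game):
    # Include campaigns matching every attribute, then subtract exclusions.
    included = r.sinter(f"campaigns:location:{location}", f"campaigns:os:{os}")
    excluded = r.smembers(f"campaigns:excluded:game:{game}")
    return included - excluded

print(eligible_campaigns("IN", "android", "g42"))   # {'c1'}
```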

Serving Automated Home Valuation with Redis and Kafka

Jiaqi Wang, Redfin

As a technology powered real estate brokerage, one of the most important questions Redfin can answer for homeowners across the entire country is how much is my home worth, a.k.a. the Redfin Estimate. To make it a service that supports various use cases on web, mobile app, email, and partner products, we leverage Redis’ high performance as a caching layer and Kafka stream processing as a warming/invalidation mechanism to decrease the service response time while maintaining high availability and consistency.

Redis Fault-Injection

Khalid Lafi, Misk Innovation Lab

From the user's point of view, Redis is a simple and easy-to-use system. Consequently, Redis has become an essential building block in an increasing number of complex and mission-critical systems. Having built a system that relies on Redis to store its transient state, Khalid realized that to have full confidence in his system, he needed to understand its behavior when Redis starts failing. So he created an abstraction on top of the client library they were using, which gave him hooks to inject failures in almost all the parts that touch Redis. This helped him find a number of critical bugs that would otherwise have taken days to debug in production. Seeing how useful this technique is, Khalid created RedFI, an open-source fault-injection Redis proxy. He decided to make it a proxy/sidecar so it would be language-agnostic and help most of the community.

Techniques for Synchronizing In-Memory Caches with Redis

Ben Malec, Paylocity

For some highly accessed values, even a network round trip incurs too much latency. The obvious solution is adding an in-memory caching layer, but that brings many challenges around keeping data in sync across multiple clients. This presentation will detail the approach Paylocity implemented, which leverages Redis Pub/Sub, buckets keys to minimize synchronization message length, and carefully exploits order of operations to eliminate the need for a master synchronization clock.
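
In outline, the pattern is: each client keeps a local dictionary, and invalidations are broadcast over a Pub/Sub channel keyed by bucket. The sketch below shows only that skeleton; the bucketing scheme, channel name, and ordering details are assumptions rather than Paylocity's implementation.

```python
import threading
import zlib
import redis

r = redis.Redis(decode_responses=True)
CHANNEL = "cache:invalidate"
local_cache = {}

def bucket_of(key, buckets=1024):
    # Stable hash so every client agrees on the bucket for a given key.
    return zlib.crc32(key.encode()) % buckets

def put(key, value):
    local_cache[key] = value
    r.set(key, value)
    r.publish(CHANNEL, str(bucket_of(key)))   # broadcast the bucket, not the full key

def listen_for_invalidations():
    pubsub = r.pubsub()
    pubsub.subscribe(CHANNEL)
    for msg in pubsub.listen():
        if msg["type"] != "message":
            continue
        stale = int(msg["data"])
        for k in [k for k in local_cache if bucket_of(k) == stale]:
            local_cache.pop(k, None)          # next read falls back to Redis

threading.Thread(target=listen_for_invalidations, daemon=True).start()
```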

Implementing a New Data Structure for Redis as an Inexperienced C Programmer

Loris Cro

I have little experience with C programming and yet I wrote and published redis-cuckoofilter, a Redis module that implements Cuckoo filters.

Starting from the original paper that presents the data structure, I worked my way up to a functioning module. While the Redis Module ecosystem is new and still subject to change, the APIs are already nice enough to build something upon, and have a good time while doing it, even as a programmer that doesn't have extensive knowledge of the C ecosystem.

This talk is an overview of my journey developing the aforementioned module and the main things that I learnt from that experience, which include: being smart about testing, thinking about API design choices, and more.

My Other Car is a Redis Cluster

Ramon Snir, Dynamic Yield

Dynamic Yield doesn't see itself as a "Redis shop", but that hasn't stopped it from adopting Redis for dozens of use cases. Over three years we reached tens of thousands of transactions per second and amassed over a terabyte of unique (non-cache) data in our Redis databases. We see Redis as an integral "second database" in our architecture: it brings extra real-time and low-latency properties to our machine learning and personalization systems. Being able to adapt to new requirements without rearchitecting, and to scale both throughput and data size with Redis Cluster, Redis Enterprise and Redis Enterprise on Flash, makes Redis a common part of most of our projects.

Event-Driven Microservices with Event Sourcing and Redis

Tom Mooney,

When building a microservices based system, one of the core problems that must be addressed is how to share data between services while maintaining availability and consistency. Using an event-driven architecture allows services to remain highly decoupled, while allowing data to be used in new and novel ways in the future that cannot be anticipated now, without the need to make sweeping changes to the existing system. Event Sourcing is a pattern for implementing an event-driven system that offers atomic operations in a distributed system without the need for expensive and complicated distributed transactions. Redis is a great technology for implementing Event Sourcing because it is already in use in many organizations, offers outstanding performance, and has built in pub/sub functionality. This presentation will focus on some of the key concerns with using Redis as a persistent event store and event bus. It will provide a brief overview of the Event Sourcing pattern, and how Redis can be used as a primary data store for persisting event data when implementing this pattern. It will also include discussion of how to use the Redis pub/sub capability and Redis data structures to create a reliable queue that will allow data to be streamed to multiple consumers asynchronously while minimizing the possibility of data loss.
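
As a bare-bones illustration of the event-store side (not the full reliable-queue design discussed in the talk), events can be appended atomically to a per-aggregate list and announced over Pub/Sub. The names and payloads below are hypothetical.

```python
import json
import redis

r = redis.Redis(decode_responses=True)

def append_event(aggregate_id, event_type, data):
    """Append an immutable event and notify interested consumers."""
    event = json.dumps({"type": event_type, "data": data})
    pipe = r.pipeline()                       # both commands ship in one MULTI/EXEC
    pipe.rpush(f"events:{aggregate_id}", event)
    pipe.publish("events:appended", aggregate_id)
    pipe.execute()

def replay(aggregate_id):
    """Rebuild current state by folding over the full event history."""
    return [json.loads(e) for e in r.lrange(f"events:{aggregate_id}", 0, -1)]

append_event("order:1001", "OrderCreated", {"total": 25.0})
append_event("order:1001", "OrderShipped", {"carrier": "UPS"})
print(replay("order:1001"))
```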

Your First Microservice with Kubernetes, Helm Charts, and Redis

Dan Garfield, Codefresh

We'll go through when it's right to split an application into microservices and how to leverage Kubernetes, Helm, and Redis along the way.

Global Redis Clusters, a Romantic Comedy

Kurt Mackey,

How we manage state across 15 datacenters and expose distributed services to customers via a JavaScript API.

Designing a Redis Client for Humans

Marc Gravell, Stack Overflow

The Redis client story is generally pretty healthy, with a wide range of client libraries available covering almost every mainstream platform and language, and a few besides - but the features and usability vary widely, especially with the changing demands of more recent features like Cluster (3.0) and Modules (4.0). In this session Marc will draw on his experience developing multiple Redis libraries and tools to discuss the good, the bad, the ugly and the hopeful future of the Redis client experience.

Transforming Vulnerability Telemetry with Redis Enterprise

Darren Chinen, Malwarebytes

Malwarebytes, the industry-leading anti-malware and internet security software company, also leads in innovative security tools. One such tool is the interactive remediation map, a worldwide security vulnerability tool that tracks malware with real-world scans. Join us to learn how Redis Enterprise helps us visualize data in real time, providing global information on detected threats, how a virus is conceived, and its eventual spread across the world. In this talk, we will illustrate how we use Redis Enterprise to drive analytics for heat maps and for the customer-facing OMNI portal. We will also describe our use of Redis Enterprise for fast customer data ingestion, session management, time-series analysis and predictive analytics.

Using a Redis Backend in a Serverless Application with Kubeless

Sebastien Goasguen, Bitnami

Kubernetes, which has emerged as the de facto standard for containerized applications, is a perfect system on which to build a serverless solution. This talk will introduce Kubernetes quickly by showing how to deploy a Redis cluster using the Helm package manager, and then show how Kubeless, a serverless solution, can be used to build a frontend API that uses the deployed Redis cluster as its backend. The talk will highlight how Redis is deployed in Kubernetes and how it can be used quickly from functions.

Redis Dev, Test, and Production with OpenShift Service Catalog

Ryan Jarvinen, Red Hat

Learn how Red Hat uses Kubernetes and the Open Service Broker to deliver customized instances of Redis, adapted to meet the unique needs and use-cases present in Dev, Test, and Production environments. This talk includes a short overview of the Open Service Broker and related technologies, before moving on to demos and example code. Come see the workflow benefits that are possible when using a container-based PaaS to automate the configuration, delivery, and management of your Redis services.

Scaling with the Whitepages Global Identity Graph Dataset Without Sacrificing Latency

Heather Wade, Varun Kumar, Jason Frazier, Whitepages, Inc.

Whitepages Inc. is the global leader in digital identity verification for businesses and consumers. It serves consumers on its open-search web properties, which see over 55 million visitors accessing North American contact information and public records, and provides businesses with global, enterprise-scale APIs and web tools to fight fraud and identify good customers at more than a million transactions daily. To best serve this expansive user and customer base, Whitepages developed its own fully integrated, high-availability Identity Graph database, which houses over 5 billion global identity records as a connected graph. When it came time to evaluate databases for caching their online contact records, Redis was a top contender for its speed and scale. Join Whitepages to learn how Redis on Flash helps them handle the already large, and ever-growing, amount of data they store without sacrificing response time or throughput; how simple and easy it was to integrate Redis on Flash; and how Redis Enterprise performed better than the alternatives with less infrastructure.

Building Lightweight Microservices Using Redis

Carlos Justiniano, Chief Software Architect, Flywheel Sports

Building microservices can be a daunting experience. Small teams often struggle with the dizzying array of tools required to build, test and maintain services. In this talk, we'll examine how to address key microservice concerns using Redis as one of the few external infrastructure dependencies. This talk is for individuals who have started, or are about to embark on, a journey to build microservices and are wondering how Redis can help. Two years ago we set out to embrace microservices. When we started, we had a small team of developers with experience building Node/ExpressJS applications using NoSQL databases and Redis. What we collectively lacked was experience with the overwhelming array of tools used to build and deploy microservices in the cloud: tools such as etcd, Consul, and ZooKeeper for service discovery, and Docker and Kubernetes for containerization and orchestration. The issue we faced was slowing feature development in order to ramp up on the necessary tools, or "right-sizing" a solution to meet our needs. We chose the latter and set out to create a small microservices library that wouldn't overwhelm Node and ExpressJS developers. Our efforts resulted in two libraries and a few supporting tools. At the core of our microservice solution is Redis. By building our Hydra library on Redis, we were able to address the following key microservice concerns:
  • Service discovery
  • Presence management
  • Interservice messaging over HTTP REST and/or Redis Pub/Sub
  • Built-in load balancing and routing
  • Service-level message queues
  • Near-zero configuration
Fast forward to today: our use of Redis and our Hydra microservices library powers our live video streaming fitness classes via our nationwide FlyAnywhere platform.
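
Service discovery and presence in the list above can be approximated with nothing more than keys that expire unless refreshed. This is a generic sketch of that idea in Python, not Hydra's actual implementation; service and key names are invented.

```python
import json
import socket
import threading
import redis

r = redis.Redis(decode_responses=True)
PRESENCE_TTL = 3      # seconds; an instance disappears if its heartbeats stop

def register(service_name, port):
    instance_id = f"{socket.gethostname()}:{port}"
    key = f"presence:{service_name}:{instance_id}"
    info = json.dumps({"host": socket.gethostname(), "port": port})

    def heartbeat():
        r.setex(key, PRESENCE_TTL, info)           # refresh this instance's presence
        t = threading.Timer(1.0, heartbeat)        # re-arm every second
        t.daemon = True
        t.start()

    heartbeat()

def discover(service_name):
    keys = r.keys(f"presence:{service_name}:*")    # fine at small scale; use SCAN otherwise
    if not keys:
        return []
    return [json.loads(v) for v in r.mget(keys) if v]

register("checkout-service", 4000)
print(discover("checkout-service"))
```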

Making Real-Time Predictive Decisions with Redis-ML

Tague Griffith, Head of Developer Advocacy, Redis Labs

Most of the energy and attention in machine learning has focused on the model-training side of the problem. Multiple frameworks, in every language, give developers access to a host of data manipulation and training algorithms, but until recently developers had virtually no frameworks for building predictive engines from trained ML models. Most resorted to building custom applications, but building highly available, highly performant applications is difficult. Redis, in conjunction with the Redis-ML module, provides a server framework for developers to build predictive engines with familiar, off-the-shelf components. Developers can take advantage of all the features of Redis to deliver faster and more reliable prediction engines with less custom development. This technical session examines how Redis can be used in conjunction with a variety of training platforms to deliver real-time predictive and decision-making features as part of a larger system. To set the context, we start with an introduction to the Redis data model and how features of Redis (namespaces, replication) can be used to build fast predictive engines at scale that are more reliable, more feature-rich and easier to manage than custom applications. From there, we look at the model-serving capabilities of Redis-ML and how they can be integrated with an ML pipeline to automate the entire model development process from training to deployment. The session ends with a demonstration of a simple machine learning pipeline: using scikit-learn we train several example models, load them directly into Redis and demonstrate Redis as the predictive engine for making real-time recommendations.

Faster Query and Search with Redis Query (RQL)

Dvir Volk, Senior Architect, Redis Labs

This session will introduce users to the new Redis Query (RQL) language and how they can employ it in their applications.

CRDTs and Redis—From Sequential to Concurrent Executions

Carlos Baquero, Assistant Professor, Universidade do Minho

With CRDB, Redis now supports geo-replication and allows each replica site to update the system with no immediate coordination, leading to very high availability and low operation latency. In other words, it supports multi-master replication (aka active-active). From the user application perspective, the system cannot be seen anymore as a single sequential copy, since now operations can be processed concurrently at different locations. Conflict-free Replicated Data Types (CRDTs) can take away a lot of the complexity when migrating from a sequential to a concurrent setting. In this talk we will explore a bit of the path in this transition and cover what can be expected in Redis data types operating under active-active deployments. Concurrent behaviour will be explored for cases such as Redis Sets, Strings (both as registers and as counters), and other types.

Building Active-Active Geo-Distributed Applications with the New Redis CRDTs

Cihan Biyikoglu, VP of Product Management, Redis Labs

Mission-critical applications look to geo-redundancy for protection against regional failures, from WAN connectivity issues to colossal natural disasters. Research around CRDTs outlines safe, lightweight and predictable mathematical models for handling active-active, geo-distributed database applications that can achieve strong eventual consistency. In this talk, we will look at common CRDT uses for building database applications with high velocity and variety of data. We'll see demos of how Redis's CRDT implementation provides a simple but safe active-active geo-distributed database platform for applications looking for active-active topologies without compromising throughput or latency.

The Design of Redis Streams

Salvatore Sanfilippo, Creator of Redis

In this session, Salvatore Sanfilippo, the creator of Redis, will discuss the design and implementation of the newest Redis data type: streams.