Redis seems to be everywhere these days.
I use it all the time myself. But recently, I stumbled upon some fascinating features that I hadn’t noticed before.
Surprise #1: ConnectionPool & BlockingConnectionPool
Here’s a simple example using the asynchronous version of the Redis library:
```python
import asyncio

from redis.asyncio import Redis


async def ping_redis(redis_client: Redis):
    return await redis_client.ping()


async def main():
    client = Redis(host='localhost', port=6379, db=0)
    print(await ping_redis(client))


if __name__ == '__main__':
    asyncio.run(main())
```
Each command we execute requests a connection to the Redis server. Since we didn't explicitly create a `ConnectionPool`, one is created for us automatically:
```python
print(client.connection_pool.max_connections)
# Output:
# 2147483648 (this is 2**31)
```
Now let's modify our `main()` function. Suppose we receive 200 incoming requests, each requiring interaction with Redis:
```python
async def main():
    client = Redis(host='localhost', port=6379, db=0)
    tasks = [ping_redis(client) for _ in range(200)]
    results = await asyncio.gather(*tasks)
    print(len(client.connection_pool._available_connections))
```
After execution, we'll observe that `_available_connections == 200`, meaning a new connection was created for each concurrent request.
By default, a single Redis server supports up to 10,000 simultaneous connections, far fewer than the default connection pool size.
This could lead to a DoS (Denial of Service) scenario if no measures are taken.
To avoid exhausting all available server connections, we can use a connection pool that reuses existing connections and limits their total count to something more realistic.
Here's how we can explicitly set `max_connections` when creating a Redis instance:
```python
async def main():
    client = Redis(host='localhost', port=6379, db=0, max_connections=10)
    tasks = [ping_redis(client) for _ in range(200)]
    results = await asyncio.gather(*tasks)
    print(len(client.connection_pool._available_connections))
```
Running this results in:

```
redis.exceptions.ConnectionError: Too many connections
```
By default, `ConnectionPool` raises an exception when no connections are available. This isn't ideal: I'd prefer coroutines to wait until a connection becomes available. For that behavior, we can use `BlockingConnectionPool`:
```python
from redis.asyncio import Redis, BlockingConnectionPool


async def main():
    pool = BlockingConnectionPool(host='localhost', port=6379, db=0, max_connections=10)
    client = Redis(connection_pool=pool)
    tasks = [ping_redis(client) for _ in range(200)]
    results = await asyncio.gather(*tasks)
    print(len(client.connection_pool._available_connections))
```
This runs successfully, with `_available_connections == 10`.
Success! 🎉
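The waiting behavior is easy to picture as a semaphore: requests beyond the cap simply queue up for a free slot instead of erroring. Here's an illustrative stdlib-only sketch of that idea (this is not how `BlockingConnectionPool` is actually implemented internally; `TinyBlockingPool` and `fake_ping` are made-up names for the demonstration):

```python
import asyncio


class TinyBlockingPool:
    """Toy model of a blocking pool: callers wait for a free slot."""

    def __init__(self, max_connections: int):
        self._slots = asyncio.Semaphore(max_connections)
        self._in_use = 0
        self.peak_in_use = 0  # tracks maximum concurrency actually reached

    async def execute(self, coro_func):
        async with self._slots:  # waits here when the pool is exhausted
            self._in_use += 1
            self.peak_in_use = max(self.peak_in_use, self._in_use)
            try:
                return await coro_func()
            finally:
                self._in_use -= 1


async def fake_ping():
    await asyncio.sleep(0.001)  # simulate a round-trip to Redis
    return True


async def main():
    pool = TinyBlockingPool(max_connections=10)
    results = await asyncio.gather(*(pool.execute(fake_ping) for _ in range(200)))
    # 200 requests completed, but never more than 10 ran at once:
    print(len(results), pool.peak_in_use)


asyncio.run(main())
```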
Surprise #2: Connection Leaks & Closing Resources
Here’s another surprising issue: after running the application for a long time, you might encounter an error saying it’s impossible to connect to Redis.
This happens because the connection pool isn’t closed when the Redis client is closed.
```python
from redis.asyncio import ConnectionPool, Redis

pool = ConnectionPool()
redis = Redis(connection_pool=pool)

await redis.aclose()
# The pool remains open, and connections may leak unless it is closed explicitly.
```
The idea behind this design is to allow reusing the same connection pool across different parts of an application. The confusing part for me was that if you create a Redis instance with both `connection_pool` and `auto_close_connection_pool=True`, the latter is ignored:
```python
pool = ConnectionPool()
redis = Redis(connection_pool=pool, auto_close_connection_pool=True)
# auto_close_connection_pool is ignored because the pool was provided manually:
# the library assumes you're responsible for closing it.
```
This behavior might seem unintuitive, and you'd be right to think so. Thankfully, in recent versions of redis-py the `auto_close_connection_pool` parameter has been deprecated.
The Right Way to Close Resources
You must manually close the connection pool:
```python
pool = ConnectionPool()
redis = Redis(connection_pool=pool)
...
await redis.aclose()
await pool.aclose()  # on older versions: await pool.disconnect()
```
Alternatively, use the newer `Redis.from_pool()` constructor:
```python
pool = ConnectionPool()
redis = Redis.from_pool(pool)
...
await redis.aclose()
# The pool is now closed as well.
```
Personally, I prefer to close the pool manually, just to be sure.
If you're using Sentinel, be aware that it manages multiple `ConnectionPool` instances under the hood, so make sure to experiment with how pools and connections are opened and closed there.