# async-cache

> A high-performance async caching solution for Python

[PyPI](https://pypi.python.org/pypi/async-cache)
[Downloads](https://pepy.tech/project/async-cache)
[Snyk Advisor](https://snyk.io/advisor/python/async-cache)

A lightweight, efficient caching solution designed specifically for asyncio applications. Supports both LRU (Least Recently Used) and TTL (Time To Live) caching strategies with a clean, decorator-based API.

## Features

- 🚀 **Async-First**: Built specifically for asyncio applications
- 🔄 **Multiple Cache Types**:
  - LRU (Least Recently Used) cache
  - TTL (Time To Live) cache
- 🎯 **Flexible Key Generation**: Works with primitive types, custom objects, and ORM models
- 🛠 **Configurable**: Adjustable cache size and TTL duration
- 🧹 **Cache Management**: Clear cache on demand
- 💡 **Smart Argument Handling**: Skip specific arguments in cache key generation
- 🔍 **Cache Bypass**: Ability to bypass cache for specific calls
## Installation

```bash
pip install async-cache
```

## Basic Usage

### LRU Cache

The LRU cache maintains a fixed number of items, evicting the least recently used item when the cache is full.

```python
from cache import AsyncLRU

@AsyncLRU(maxsize=128)
async def get_user_data(user_id: int) -> dict:
    # Expensive database operation
    data = await db.fetch_user(user_id)
    return data
```
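The eviction behaviour can be illustrated with a short stdlib-only sketch. This is a conceptual model built on `OrderedDict`, not the library's actual implementation; `async_lru` and `square` are hypothetical names used only for this demo:

```python
import asyncio
from collections import OrderedDict

def async_lru(maxsize=128):
    """Minimal sketch of an async LRU decorator (illustration only)."""
    def decorator(fn):
        cache = OrderedDict()

        async def wrapper(*args):
            if args in cache:
                cache.move_to_end(args)    # mark as most recently used
                return cache[args]
            result = await fn(*args)
            cache[args] = result
            if len(cache) > maxsize:
                cache.popitem(last=False)  # evict the least recently used entry
            return result

        return wrapper
    return decorator

calls = []

@async_lru(maxsize=2)
async def square(n):
    calls.append(n)  # record only real computations, i.e. cache misses
    return n * n

async def demo():
    await square(1)  # miss
    await square(2)  # miss
    await square(1)  # hit: 1 becomes most recently used
    await square(3)  # miss: cache is full, so 2 (least recent) is evicted
    await square(2)  # miss: 2 must be recomputed
    return list(calls)

history = asyncio.run(demo())
print(history)  # [1, 2, 3, 2] — only misses reach the wrapped function
```

Note how reusing an entry (the second `square(1)`) protects it from eviction, which is exactly the property that makes LRU a good fit for hot-path data.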

### TTL Cache

The TTL cache automatically expires entries after a specified time period.

```python
from cache import AsyncTTL

@AsyncTTL(time_to_live=60, maxsize=1024)
async def get_weather(city: str) -> dict:
    # External API call
    weather = await weather_api.get_data(city)
    return weather
```
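Expiry semantics can likewise be sketched with the stdlib. This is a hypothetical model (`async_ttl` and `fetch` are illustration-only names); the real decorator also enforces `maxsize` and handles keys more carefully:

```python
import asyncio
import time

def async_ttl(time_to_live=60.0):
    """Minimal sketch of an async TTL decorator (illustration only)."""
    def decorator(fn):
        cache = {}  # args -> (expiry timestamp, value)

        async def wrapper(*args):
            now = time.monotonic()
            entry = cache.get(args)
            if entry is not None and entry[0] > now:
                return entry[1]  # entry still fresh: cache hit
            value = await fn(*args)
            cache[args] = (now + time_to_live, value)
            return value

        return wrapper
    return decorator

calls = []

@async_ttl(time_to_live=0.05)
async def fetch(x):
    calls.append(x)  # record only real fetches, i.e. cache misses
    return x * 10

async def demo():
    assert await fetch(1) == 10  # miss: computed and cached
    assert await fetch(1) == 10  # hit: entry still within its TTL
    await asyncio.sleep(0.1)     # wait past the TTL
    assert await fetch(1) == 10  # miss: entry expired, recomputed
    return list(calls)

ttl_history = asyncio.run(demo())
print(ttl_history)  # [1, 1] — the expired entry triggered one extra fetch
```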

## Advanced Usage

### Working with Custom Objects

The cache works seamlessly with custom objects and ORM models:

```python
from dataclasses import dataclass
from cache import AsyncLRU

@dataclass
class UserFilter:
    age: int
    country: str
    status: str

@AsyncLRU(maxsize=128)
async def filter_users(filter_params: UserFilter) -> list:
    # Complex filtering operation
    users = await db.filter_users(
        age=filter_params.age,
        country=filter_params.country,
        status=filter_params.status
    )
    return users
```

### Skipping Arguments

Useful for methods where certain arguments shouldn't affect the cache key:

```python
from cache import AsyncTTL

class UserService:
    @AsyncTTL(time_to_live=300, maxsize=1000, skip_args=1)
    async def get_user_preferences(self, user_id: int) -> dict:
        # 'self' is skipped in cache key generation
        return await self.db.get_preferences(user_id)
```
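Why this matters can be shown with a small stdlib sketch of key generation (`make_key` and `Service` are hypothetical helpers for this demo, not part of the library's API):

```python
def make_key(args, skip_args=0):
    """Build a cache key from positional args, dropping the first skip_args."""
    return args[skip_args:]

class Service:
    pass

a, b = Service(), Service()

# Without skipping, each instance produces a distinct key for the same
# user_id, so two Service instances never share cached results:
print(make_key((a, 42)) == make_key((b, 42)))  # False

# With skip_args=1, 'self' is dropped and the keys match across instances:
print(make_key((a, 42), skip_args=1) == make_key((b, 42), skip_args=1))  # True
```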

### Cache Management

#### Bypassing Cache

```python
# Normal cached call
result = await get_user_data(123)

# Force fresh data
fresh_result = await get_user_data(123, use_cache=False)
```

#### Clearing Cache

```python
# Clear the entire cache
get_user_data.cache_clear()
```

## Performance Considerations

- The LRU cache is ideal for frequently accessed data with no expiration requirements
- The TTL cache is suited to data that becomes stale after a certain period
- Choose `maxsize` based on your memory constraints and data access patterns
- Use `skip_args` when caching class methods so that results are shared across instances instead of being cached per instance

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.

## License

This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.