🚦 Load Management System in Go with Redis

Welcome! This project is a load management system built in Go, designed to limit the number of requests to a service within a specified time frame. 🎟️ Initially developed for queue management in ticket sales, its flexible design makes it suitable for a variety of use cases where request rate control is crucial. πŸš€


✨ Key Features

  • βš™οΈ Efficient Request Management:

    • Controls the maximum number of requests allowed within a specific time window.
    • Prevents system overload and ensures smooth user experiences.
  • πŸ“¦ Redis-based State Management:

    • Uses Redis for fast and reliable centralized state storage.
    • Optimized for distributed systems and high-performance workloads.
  • πŸ“ˆ Horizontal Scalability:

    • Designed for seamless scaling in modern architectures like Kubernetes.
    • Supports multiple instances with consistent state sharing via Redis.
  • πŸ› οΈ Fault Tolerance:

    • Built to gracefully handle partial failures without disrupting service.
  • 🌍 Versatility:

    • Useful for a variety of scenarios, including:
      • πŸ›‘οΈ Protecting against brute force attacks.
      • πŸ”— Controlling access to shared resources.
      • ⏳ Managing high-demand queues, such as ticket sales.

πŸ› οΈ Technologies Used

  • πŸ’» Programming Language: Go (Golang).
  • πŸ“Š Database: Redis, serving as the centralized store for state and counters.
  • ☸️ Infrastructure Compatibility: Kubernetes and other container-based environments.

πŸš€ How It Works

  1. πŸ” Request Handling:

    • Each request is identified using an IP address, token, or other unique identifier.
  2. ⏳ Rate Limiting:

    • Checks how many requests the identifier has made within the configured time window (see the sketch after this list).
    • Rejects excess requests with an HTTP 429 Too Many Requests response.
  3. πŸ”„ Distributed Synchronization:

    • Ensures consistent state sharing across nodes using Redis.
  4. πŸ›‘οΈ Resilience:

    • Handles failures gracefully, including temporary Redis outages or node crashes.
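
To make the flow above concrete, here is a minimal sketch of a fixed-window rate limiter backed by Redis. It uses the github.com/redis/go-redis/v9 client; the key names, limits, HTTP wiring, and fail-open behavior are illustrative assumptions, not this project's actual implementation.

// rate_limiter_sketch.go — illustrative only, not this repository's code.
package main

import (
    "context"
    "fmt"
    "log"
    "net/http"
    "time"

    "github.com/redis/go-redis/v9"
)

const (
    maxRequests = 100             // allowed requests per window (illustrative)
    window      = 1 * time.Minute // length of the rate-limit window (illustrative)
)

var rdb = redis.NewClient(&redis.Options{Addr: "localhost:6379"})

// allow increments the counter for this client and reports whether the
// request still fits inside the current window.
func allow(ctx context.Context, clientID string) (bool, error) {
    key := "ratelimit:" + clientID

    // INCR creates the key with value 1 if it does not exist yet.
    count, err := rdb.Incr(ctx, key).Result()
    if err != nil {
        return false, err
    }
    // First request in the window: start the expiry clock.
    if count == 1 {
        if err := rdb.Expire(ctx, key, window).Err(); err != nil {
            return false, err
        }
    }
    return count <= maxRequests, nil
}

func handler(w http.ResponseWriter, r *http.Request) {
    // Identify the caller by IP here; a token or API key works the same way.
    ok, err := allow(r.Context(), r.RemoteAddr)
    if err != nil {
        // Redis unavailable: fail open so a cache outage does not take the
        // service down (failing closed is equally valid, depending on needs).
        ok = true
    }
    if !ok {
        http.Error(w, "Too Many Requests", http.StatusTooManyRequests)
        return
    }
    fmt.Fprintln(w, "request accepted")
}

func main() {
    http.HandleFunc("/", handler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

A fixed window is the simplest scheme; sliding-window or token-bucket variants trade a little extra Redis work for smoother limiting at window boundaries.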

πŸ“¦ Use Cases

  1. 🎟️ Queue Management:

    • Prevents system saturation during high-demand events like ticket sales.
  2. πŸ›‘οΈ Rate Limiting:

    • Protects public and private APIs from abuse by controlling request rates.
  3. πŸ”— Shared Resource Management:

    • Guarantees fair access to resources in distributed systems.

πŸ“– Getting Started

1️⃣ Clone the Repository:

git clone https://github.com/by2waysprojects/queue-processor.git
cd queue-processor

2️⃣ Set Up Redis:

  • Install Redis locally or use a cloud-based solution like Redis Cloud. ☁️
  • Update the Redis connection string in the application configuration file.
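
The exact configuration format depends on the repository; purely as an illustration, the connection details you point at your Redis instance typically look something like this (variable names and values are hypothetical):

REDIS_ADDR=localhost:6379      # host:port of the Redis instance
REDIS_PASSWORD=                # leave empty for an unauthenticated local Redis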

3️⃣ Build and Run the Application:

go build -o queue-processor cmd/main.go
./queue-processor

4️⃣ Use Docker:

Prefer containers? Build and run the Docker image! 🐳

docker build -t queue-processor -f build/Dockerfile .
docker run -p 8080:8080 queue-processor
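
With the container listening on port 8080, you can exercise the limiter from the command line. The endpoint path and the configured limit are assumptions here; adjust them to match your setup.

curl -i http://localhost:8080/
# Repeat until the configured limit for the window is exceeded;
# the service then responds with:
# HTTP/1.1 429 Too Many Requests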

5️⃣ Deploy to Kubernetes:

Use the provided Helm chart or manifest files to deploy on Kubernetes. ☸️
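
For example, assuming the chart or manifests live under a charts/ or deploy/ directory (paths are illustrative; adjust them to the repository layout):

helm install queue-processor ./charts/queue-processor
# or, using plain manifests:
kubectl apply -f deploy/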


🀝 Contributing

We ❀️ contributions! Please check out our Contribution Guidelines for how to submit issues, suggest improvements, or open pull requests.


πŸ“œ License

This project is licensed under the Apache-2.0 License. See the LICENSE file for more details.


For questions or support, feel free to reach out via issues. πŸ’¬