add pq data structure, update cheatsheet
yennanliu committed Nov 7, 2023
1 parent acea74c commit e10700e
Showing 21 changed files with 1,424 additions and 9 deletions.
17 changes: 9 additions & 8 deletions README.md
@@ -146,18 +146,19 @@
 ## Data Structure
 | # | Title | Solution | Use case | Comment | Status|
 | --- | ----- | -------- | ---- | ----- | ---- |
-||Queue| [Python ](./data_structure/python/queue2.py), [JS](./data_structure/js/queue.js) | | | AGAIN*|
-||Stack| [Python ](./data_structure/python/stack.py), [JS stack (via linkedlist)](./data_structure/js/stack_linkedlist.js), [JS - stack (via array)](./data_structure/js/stack_array.js) | | | OK|
-||LinkedList| [Python](./data_structure/python/linkedList.py), [JS](./data_structure/js/linkedlist.js), [Java](./data_structure/java/LinkedinList.java) | | | OK**|
+||Queue| [Py](./data_structure/python/queue2.py), [JS](./data_structure/js/queue.js) | | | AGAIN*|
+||Stack| [Py](./data_structure/python/stack.py), [JS (linkedlist)](./data_structure/js/stack_linkedlist.js), [JS (array)](./data_structure/js/stack_array.js) | | | OK|
+||LinkedList| [Py](./data_structure/python/linkedList.py), [JS](./data_structure/js/linkedlist.js), [Java](./data_structure/java/LinkedinList.java) | | | OK**|
 ||Doubly LinkedList| [Python](./data_structure/python/doublylinkedlist.py), [JS](./data_structure/js/doublylinkedList.js) | | | AGAIN|
-||Tree| [Python ](./data_structure/python/tree.py) | | | AGAIN**|
-||Trie| [Python ](./data_structure/python/trie.py) | | | AGAIN|
+||Tree| [Py](./data_structure/python/tree.py) | | | AGAIN**|
+||Trie| [Py](./data_structure/python/trie.py) | | | AGAIN|
 ||Heap| [heap.py](./data_structure/python/heap.py), [MinHeap.py](./data_structure/python/MinHeap.py), [MaxHeap.py](./data_structure/python/MaxHeap.py), [MinHeap.java](./leetcode_java/src/main/java/AlgorithmJava/MinHeap.java), [MaxHeap.java](./leetcode_java/src/main/java/AlgorithmJava/MaxHeap.java) | | | AGAIN|
-||Array| [Python ](./data_structure/python/array.py) | | | AGAIN*|
-||Graph| [Python ](./data_structure/python/graph.py), [JS](./data_structure/js/graph.js). [Java-graph](./algorithm/java/Graph.java), [Java-graph-client](./algorithm/java/GraphClient.java) | | | OK***|
+||Array| [Py](./data_structure/python/array.py) | | | AGAIN*|
+||Graph| [Py](./data_structure/python/graph.py), [JS](./data_structure/js/graph.js), [Java1](./algorithm/java/Graph.java), [Java2](./algorithm/java/GraphClient.java) | | | OK***|
 ||Binary search Tree (BST)| [Python](./data_structure/python/binary_search_tree.py), [JS](./data_structure/js/binary_search_tree.js), [Scala](./data_structure/scala/binarySearch.scala), [Java](./data_structure/java/BST.java) | | | AGAIN|
-||Hash table| [Python](./data_structure/python/hash_table.py), [JS](./data_structure/js/hash_table.js) | usually for improving `time complexity B(O)` via extra space complexity (time-space tradeoff)|`good basic`| AGAIN****|
+||Hash table| [Py](./data_structure/python/hash_table.py), [JS](./data_structure/js/hash_table.js) | usually for improving `time complexity (Big-O)` via extra space complexity (time-space tradeoff)|`good basic`| AGAIN****|
 ||DirectedEdge| [Java](./data_structure/java/DirectedEdge.java) | | | AGAIN|
+||Priority Queue (PQ)| [Py 1](./data_structure/python/pq_1.py), [Py 2](./data_structure/python/pq_2.py), [Py 3](./data_structure/python/pq_3.py) | | | AGAIN|


 ## Algorithm
251 changes: 251 additions & 0 deletions data_structure/python/materials-queue/README.md
@@ -0,0 +1,251 @@
# Python Stacks, Queues, and Priority Queues in Practice

Sample code supplementing the tutorial on [Python queues](https://realpython.com/queue-in-python/) hosted on Real Python.

## Installation

To get started, create and activate a new virtual environment, and then install the required dependencies into it:

```shell
$ python3 -m venv venv/ --prompt=queue
$ source venv/bin/activate
(queue) $ python -m pip install -r requirements.txt -c constraints.txt
```

## Usage

### Queue Implementation

Change directory to `src/` and run the interactive Python interpreter:

```shell
(queue) $ cd src/
(queue) $ python -q
```

Then, import various queue data types from the `queues` module and start using them:

```python
>>> from queues import Queue, Stack, PriorityQueue

>>> fifo, stack, heap = Queue(), Stack(), PriorityQueue()
>>> for priority, element in enumerate(["1st", "2nd", "3rd"]):
...     fifo.enqueue(element)
...     stack.enqueue(element)
...     heap.enqueue_with_priority(priority, element)

>>> for elements in zip(fifo, stack, heap):
...     print(elements)
...
('1st', '3rd', '3rd')
('2nd', '2nd', '2nd')
('3rd', '1st', '1st')
```
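
The `queues` module itself isn't reproduced in this README, but the session above pins down its interface. Here is a minimal sketch of what such a module could look like, assuming `collections.deque` and `heapq` underneath; internal details such as the `dequeue()` method are assumptions, and the real module in `src/` also handles things like breaking ties between equal priorities:

```python
# queues.py (hypothetical minimal sketch, not the shipped module)
from collections import deque
from heapq import heappop, heappush


class IterableMixin:
    """Make a queue sized and iterable by repeatedly dequeuing."""

    def __len__(self):
        return len(self._elements)

    def __iter__(self):
        while len(self) > 0:
            yield self.dequeue()


class Queue(IterableMixin):
    def __init__(self, *elements):
        self._elements = deque(elements)

    def enqueue(self, element):
        self._elements.append(element)

    def dequeue(self):
        return self._elements.popleft()  # FIFO: oldest element first


class Stack(Queue):
    def dequeue(self):
        return self._elements.pop()  # LIFO: newest element first


class PriorityQueue(IterableMixin):
    def __init__(self):
        self._elements = []

    def enqueue_with_priority(self, priority, value):
        # Negate the priority so heapq's min-heap yields the highest
        # priority first, matching the session above.
        heappush(self._elements, (-priority, value))

    def dequeue(self):
        return heappop(self._elements)[1]
```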

### Graph Algorithms

Change directory to `src/` and run the interactive Python interpreter:

```shell
(queue) $ cd src/
(queue) $ python -q
```

Then, import various `graph` module members and start using them:

```python
>>> from graph import *

>>> nodes, graph = load_graph("roadmap.dot", City.from_dict)

>>> city1 = nodes["london"]
>>> city2 = nodes["edinburgh"]

>>> def distance(weights):
...     return float(weights["distance"])

>>> for city in dijkstra_shortest_path(graph, city1, city2, distance):
...     print(city.name)
...
City of London
St Albans
Coventry
Birmingham
Stoke-on-Trent
Manchester
Salford
Preston
Lancaster
Carlisle
Edinburgh

>>> for city in shortest_path(graph, city1, city2):
...     print(city.name)
...
City of London
Bristol
Newport
St Asaph
Liverpool
Preston
Lancaster
Carlisle
Edinburgh

>>> connected(graph, city1, city2)
True

>>> def is_twentieth_century(city):
...     return city.year and 1901 <= city.year <= 2000

>>> breadth_first_search(graph, city2, is_twentieth_century)
City(
    name='Lancaster',
    country='England',
    year=1937,
    latitude=54.047,
    longitude=-2.801
)

>>> depth_first_search(graph, city2, is_twentieth_century)
City(
    name='Lancaster',
    country='England',
    year=1937,
    latitude=54.047,
    longitude=-2.801
)
```
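
`load_graph` parses the DOT file (presumably via `networkx` and `pydot`, both pinned in `constraints.txt`) and calls the factory you pass in, here `City.from_dict`, once per node. A plausible sketch of that type, consistent with the reprs printed above; the DOT attribute keys below are assumptions:

```python
# graph.py (hypothetical excerpt): the node type load_graph() builds
from typing import NamedTuple, Optional


class City(NamedTuple):
    name: str
    country: str
    year: Optional[int]
    latitude: float
    longitude: float

    @classmethod
    def from_dict(cls, attributes):
        # The attribute names below are assumed DOT node attributes.
        return cls(
            name=attributes["xlabel"],
            country=attributes["country"],
            year=int(attributes["year"]) or None,  # 0 becomes None
            latitude=float(attributes["latitude"]),
            longitude=float(attributes["longitude"]),
        )
```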

### Thread-Safe Queues

Change directory to `src/` and run the script with optional parameters. For example:

```shell
(queue) $ cd src/
(queue) $ python thread_safe_queues.py --queue fifo \
                                       --producers 3 \
                                       --consumers 2 \
                                       --producer-speed 1 \
                                       --consumer-speed 1
```

**Parameters:**

| Short Name | Long Name | Value |
|-----------:|-------------------:|------------------------|
| `-q` | `--queue` | `fifo`, `lifo`, `heap` |
| `-p` | `--producers` | number |
| `-c` | `--consumers` | number |
| `-ps` | `--producer-speed` | number |
| `-cs` | `--consumer-speed` | number |
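
Internally, a script like this connects producer and consumer threads through one of the synchronized queues, presumably from the standard-library `queue` module. Reduced to a self-contained sketch, with illustrative names and a `None` sentinel that are not the script's own:

```python
import queue
import threading


def producer(jobs: queue.Queue) -> None:
    for item in ["1st", "2nd", "3rd"]:
        jobs.put(item)  # blocks while the queue is full
    jobs.put(None)  # sentinel: tell the consumer to stop


def consumer(jobs: queue.Queue) -> None:
    while (item := jobs.get()) is not None:  # blocks while the queue is empty
        print(f"consumed {item}")


jobs = queue.Queue(maxsize=10)  # or queue.LifoQueue / queue.PriorityQueue
workers = [
    threading.Thread(target=producer, args=(jobs,)),
    threading.Thread(target=consumer, args=(jobs,)),
]
for worker in workers:
    worker.start()
for worker in workers:
    worker.join()
```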

### Asynchronous Queues

Change directory to `src/` and run the script with a mandatory URL and optional parameters:

```shell
(queue) $ cd src/
(queue) $ python async_queues.py http://localhost:8000/ --max-depth 2 \
                                                        --num-workers 3
```

**Parameters:**

| Short Name | Long Name | Value |
|-----------:|----------------:|--------|
| `-d` | `--max-depth` | number |
| `-w` | `--num-workers` | number |

Note that to switch between the available queue types, you'll need to edit the `main()` coroutine function:

```python
# async_queues.py

# ...

async def main(args):
    session = aiohttp.ClientSession()
    try:
        links = Counter()
        queue = asyncio.Queue()
        # queue = asyncio.LifoQueue()
        # queue = asyncio.PriorityQueue()

        # ...
```
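
Whichever queue you uncomment, the worker coroutines drain it with the same pattern: await an item, process it, then mark it done so `queue.join()` can unblock. A stripped-down sketch of that pattern, with illustrative names rather than the script's exact code:

```python
import asyncio


async def worker(name: str, jobs: asyncio.Queue) -> None:
    while True:
        url = await jobs.get()  # suspend until a job arrives
        print(f"{name} crawling {url}")
        jobs.task_done()  # lets jobs.join() track completion


async def main() -> None:
    jobs = asyncio.Queue()  # or asyncio.LifoQueue / asyncio.PriorityQueue
    for url in ["/", "/about", "/blog"]:
        jobs.put_nowait(url)
    workers = [asyncio.create_task(worker(f"w{i}", jobs)) for i in range(3)]
    await jobs.join()  # wait until every job is marked done
    for task in workers:
        task.cancel()  # the workers loop forever, so cancel them


asyncio.run(main())
```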

### Multiprocessing Queue

Change directory to `src/` and run the script with a mandatory MD5 hash value and optional parameters:

```shell
(queue) $ cd src/
(queue) $ python multiprocess_queue.py a9d1cbf71942327e98b40cf5ef38a960 -m 6 -w 4
```

**Parameters:**

| Short Name | Long Name | Value |
|-----------:|----------------:|--------|
| `-m` | `--max-length` | number |
| `-w` | `--num-workers` | number |

The maximum length caps how many characters the guessed text can have. If you omit the number of workers, the script creates one worker per detected CPU core.
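
The underlying job is a brute-force search: hash every candidate text up to the maximum length and compare it against the target digest. A single-process sketch of that core loop, assuming lowercase ASCII candidates; the actual script splits the candidate space into chunks across `multiprocessing` workers:

```python
import hashlib
from itertools import product
from string import ascii_lowercase
from typing import Optional


def reverse_md5(hash_value: str, max_length: int = 6) -> Optional[str]:
    """Return the text whose MD5 digest matches, or None if not found."""
    for length in range(1, max_length + 1):
        for letters in product(ascii_lowercase, repeat=length):
            text = "".join(letters)
            if hashlib.md5(text.encode("utf-8")).hexdigest() == hash_value:
                return text
    return None


print(reverse_md5("a9d1cbf71942327e98b40cf5ef38a960"))  # the example hash above
```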

### Message Brokers

#### RabbitMQ

Start a RabbitMQ broker with Docker:

```shell
$ docker run -it --rm --name rabbitmq -p 5672:5672 rabbitmq
```

Open separate terminal windows, activate your virtual environment, change directory to `message_brokers/rabbitmq/`, and run your producer and consumer scripts:

```shell
(queue) $ cd message_brokers/rabbitmq/
(queue) $ python producer.py
(queue) $ python consumer.py
```

You can have as many producers and consumers as you like.

#### Redis

Start a Redis server with Docker:

```shell
$ docker run -it --rm --name redis -p 6379:6379 redis
```

Open separate terminal windows, activate your virtual environment, change directory to `message_brokers/redis/`, and run your publisher and subscriber scripts:

```shell
(queue) $ cd message_brokers/redis/
(queue) $ python publisher.py
(queue) $ python subscriber.py
```

You can have as many publishers and subscribers as you like.
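
Neither script is more than a few lines with redis-py. For reference, a plausible sketch of the subscriber side; the shipped `subscriber.py` may differ:

```python
# subscriber.py (hypothetical sketch)
import redis

with redis.Redis() as client:
    pubsub = client.pubsub()
    pubsub.subscribe("chatroom")
    for message in pubsub.listen():
        if message["type"] == "message":  # skip subscribe confirmations
            print(f"Got message: {message['data'].decode('utf-8')}")
```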

#### Apache Kafka

Change directory to `message_brokers/kafka/` and start an Apache Kafka cluster with Docker Compose:

```shell
$ cd message_brokers/kafka/
$ docker-compose up
```

Open separate terminal windows, activate your virtual environment, change directory to `message_brokers/kafka/`, and run your producer and consumer scripts:

```shell
(queue) $ cd message_brokers/kafka/
(queue) $ python producer.py
(queue) $ python consumer.py
```

You can have as many producers and consumers as you like.
24 changes: 24 additions & 0 deletions data_structure/python/materials-queue/constraints.txt
@@ -0,0 +1,24 @@
aiohttp==3.8.1
aiosignal==1.2.0
async-timeout==4.0.2
attrs==21.4.0
beautifulsoup4==4.11.1
charset-normalizer==2.1.0
commonmark==0.9.1
Deprecated==1.2.13
frozenlist==1.3.0
idna==3.3
kafka-python3==3.0.0
multidict==6.0.2
networkx==2.8.4
packaging==21.3
pika==1.2.1
pydot==1.4.2
Pygments==2.12.0
pygraphviz==1.9
pyparsing==3.0.9
redis==4.3.3
rich==12.4.4
soupsieve==2.3.2.post1
wrapt==1.14.1
yarl==1.7.2
8 changes: 8 additions & 0 deletions data_structure/python/materials-queue/message_brokers/kafka/consumer.py
@@ -0,0 +1,8 @@
# consumer.py

from kafka3 import KafkaConsumer

consumer = KafkaConsumer("datascience")
for record in consumer:
    message = record.value.decode("utf-8")
    print(f"Got message: {message}")
22 changes: 22 additions & 0 deletions data_structure/python/materials-queue/message_brokers/kafka/docker-compose.yml
@@ -0,0 +1,22 @@
# docker-compose.yml

version: "3"
services:
  zookeeper:
    image: 'bitnami/zookeeper:latest'
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_BROKER_ID=1
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://127.0.0.1:9092
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper
8 changes: 8 additions & 0 deletions data_structure/python/materials-queue/message_brokers/kafka/producer.py
@@ -0,0 +1,8 @@
# producer.py

from kafka3 import KafkaProducer

producer = KafkaProducer(bootstrap_servers="localhost:9092")
while True:
    message = input("Message: ")
    producer.send(topic="datascience", value=message.encode("utf-8"))
19 changes: 19 additions & 0 deletions data_structure/python/materials-queue/message_brokers/rabbitmq/consumer.py
@@ -0,0 +1,19 @@
# consumer.py

import pika

QUEUE_NAME = "mailbox"


def callback(channel, method, properties, body):
    message = body.decode("utf-8")
    print(f"Got message: {message}")


with pika.BlockingConnection() as connection:
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_NAME)
    channel.basic_consume(
        queue=QUEUE_NAME, auto_ack=True, on_message_callback=callback
    )
    channel.start_consuming()
14 changes: 14 additions & 0 deletions data_structure/python/materials-queue/message_brokers/rabbitmq/producer.py
@@ -0,0 +1,14 @@
# producer.py

import pika

QUEUE_NAME = "mailbox"

with pika.BlockingConnection() as connection:
    channel = connection.channel()
    channel.queue_declare(queue=QUEUE_NAME)
    while True:
        message = input("Message: ")
        channel.basic_publish(
            exchange="", routing_key=QUEUE_NAME, body=message.encode("utf-8")
        )
8 changes: 8 additions & 0 deletions data_structure/python/materials-queue/message_brokers/redis/publisher.py
@@ -0,0 +1,8 @@
# publisher.py

import redis

with redis.Redis() as client:
    while True:
        message = input("Message: ")
        client.publish("chatroom", message)