[INLONG-11386][TubeMQ] Use local files to save consumer group offset information (#11395) #1066

Merged · 4 commits · Oct 24, 2024
75 changes: 34 additions & 41 deletions docs/modules/tubemq/configure_introduction.md

Large diffs are not rendered by default.

Binary file modified docs/modules/tubemq/img/sys_structure.png
8 changes: 3 additions & 5 deletions docs/modules/tubemq/overview.md
@@ -3,8 +3,8 @@ title: Overview
sidebar_position: 1
---

## 1 TubeMQ Architecture:
After years of evolution, the TubeMQ cluster is divided into the following 5 parts:
## TubeMQ Architecture:
After years of evolution, the TubeMQ cluster is divided into the following 4 parts:
![](img/sys_structure.png)

- **Portal:** The Portal part is responsible for external interaction and maintenance operations, including API and Web.
@@ -29,9 +29,7 @@ After years of evolution, the TubeMQ cluster is divided into the following 5 parts
service supports resetting the precise offset through the client to support exactly-once consumption by the business.
At the same time, a new cross-cluster, switch-free Consumer client has been launched on the consumer side;

- **ZooKeeper:** Responsible for the ZooKeeper part of the offset storage. This part of the function has been weakened to only the persistent storage of the offset. Considering the next multi-node copy function, this module is temporarily reserved;

## 2 Broker File Storage Scheme Improvement:
## Broker File Storage Scheme Improvement:
Systems that use disks as the data persistence medium face various performance problems caused by disk issues, and the TubeMQ system is no exception: its performance improvements largely come from solving how message data is read, written and stored. In this regard TubeMQ has made many improvements: a storage instance is the smallest Topic data management unit; each storage instance includes a file storage block and a memory cache block; and each Topic can be assigned multiple storage instances.

1. **File storage block:** The disk storage solution of TubeMQ is similar to Kafka's, but not identical, as shown in the following figure: each file storage block is composed of an index file and a data file; the partition is a logical partition within the data file; and each Topic maintains and manages its file storage blocks separately, with related mechanisms including the aging cycle, the number of partitions, whether it is readable and writable, etc.
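As a rough illustration of the storage-instance layout described above, a Topic's data directory could look like the sketch below. The `demo-0` instance name and the file names are hypothetical, not TubeMQ's actual naming scheme; only the "one index file plus one data file per storage instance" structure comes from the text.

```bash
# Hypothetical layout of one Topic storage instance under the broker data path.
# File names are illustrative only; the real TubeMQ naming scheme may differ.
ls /stage/msg_data/demo-0
# 00000000000000000000.index   index file of the file storage block
# 00000000000000000000.data    data file holding the Topic's logical partitions
```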
51 changes: 24 additions & 27 deletions docs/modules/tubemq/quick_start.md
@@ -2,22 +2,26 @@
title: Quick Start
sidebar_position: 2
---
## 1 Deploy and Start
## Deploy and Start

### 1.1 Configuration Example
### Configuration Example
There are two components in the cluster: **Master** and **Broker**. Master and Broker
can be deployed on the same server or on different servers. In this example, we set up our cluster
like this, with all services running on the same node. ZooKeeper should also be set up in your environment.

| Role | TCP Port | TLS Port | Web Port | Comment |
| ---- | -------- | -------- | -------- |----------------------------------------------------------------------|
| Master | 8099 | 8199 | 8080 | Meta data is stored in ZooKeeper /tubemq |
| Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msg_data |
| Zookeeper | 2181 | | | Master metadata or Broker offset information are stored at /tubemq |
| Role | TCP Port | TLS Port | Web Port | Comment |
| ---- | -------- | -------- | -------- |-------------------------------------------------------------------------------------------------|
| Master | 8099 | 8199 | 8080 | Meta data is stored in ZooKeeper /tubemq |
| Broker | 8123 | 8124 | 8081 | Message is stored at /stage/msg_data |
| Zookeeper | 2181 | | | Master metadata is stored at /tubemq; this component is not required if meta_bdb is configured. |
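Once the services are started later in this guide, the ports in the table above should be listening. A quick way to confirm this on the node is sketched below, assuming `ss` is available (`netstat -ltn` works similarly):

```bash
# List listening TCP sockets and keep only the TubeMQ/ZooKeeper ports from the table.
ss -ltn | grep -E ':(8099|8123|8080|8081|2181)'
```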

### 1.2 Prerequisites
### Prerequisites
- ZooKeeper Cluster

ZooKeeper is not mandatory in the TubeMQ environment. If the Master metadata is stored in BDB, this part of the resource can be omitted.

- TubeMQ installation package deployment

After you extract the package file, here's the folder structure.
```
/INSTALL_PATH/inlong-tubemq-server/
├── bin
├── conf
├── lib
├── logs
└── resources
```
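For reference, a typical way to produce this layout from the release archive is sketched below. The archive name follows the `apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin` pattern used later in this guide and is an assumption here; adjust the version and target path to your download.

```bash
# Extract the TubeMQ server release into the install path (archive name is assumed).
tar -xzf apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin.tar.gz -C /INSTALL_PATH
```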

### 1.3 Configure Master
### Configure Master
You can change configurations in `conf/master.ini` according to cluster information.
- Master IP and Port
```ini
[master]
hostName=YOUR_MASTER_IP  // representative quick-start values, confirm against your conf/master.ini
port=8099
webPort=8080
```

@@ -77,7 +81,7 @@ the introduction of availability level.
- It is necessary to ensure the clock synchronization between all master nodes
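One way to verify clock synchronization on each master node is sketched below, assuming `chrony` is the time service in use; environments running `ntpd` can use `ntpstat` instead.

```bash
# Shows how far the local clock drifts from its NTP reference; run on every master node.
chronyc tracking
```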


### 1.4 Configure Broker
### Configure Broker
You can change configurations in `conf/broker.ini` according to cluster information.
- Broker IP and Port
```ini
[broker]
hostName=YOUR_BROKER_IP  // representative quick-start values, confirm against your conf/broker.ini
port=8123
webPort=8081
masterAddressList=MASTER_NODE_IP1:8099,MASTER_NODE_IP2:8099 // multiple addresses can be separated with ","
primaryPath=/stage/msg_data
```

- ZooKeeper Cluster
```ini
[zookeeper] // Master and Broker in the same cluster must use the same zookeeper environment and have the same configuration
zkNodeRoot=/tubemq
zkServerAddr=localhost:2181 // multi zookeeper addresses can separate with ","
```
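If ZooKeeper is kept in the deployment, a quick liveness check against the address configured above is sketched here, assuming `nc` is installed and the `ruok` four-letter command is whitelisted on the ZooKeeper server.

```bash
# A healthy ZooKeeper node answers "imok".
echo ruok | nc localhost 2181
```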

### 1.5 Start Master
### Start Master
- Please go to the `bin` folder and run this command to start
the master service.
```bash
./tubemq.sh master start
```
- Access `http://YOUR_MASTER_IP:8080` in a browser; if the master service has started successfully, you can see the web GUI now.
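A quick reachability check of the master web port from the configuration table is sketched below; it assumes `curl` is available and only verifies that the port answers, not full cluster health.

```bash
curl -sSf -o /dev/null http://YOUR_MASTER_IP:8080 && echo "master web GUI is reachable"
```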


#### 1.5.1 Configure Broker Metadata
#### Configure Broker Metadata
Before we start a broker service, we need to configure it on the master web GUI first. Go to the `Broker List` page, click `Add Single Broker`, and input the new broker information.
In this example, we only need to input broker IP and authToken:
1. broker IP: the broker server IP
2. authToken: the `confModAuthToken` value configured in `conf/master.ini`
Click the online link to activate the newly added broker.


### 1.6 Start Broker
### Start Broker
- Please go to the `bin` folder and run this command to start the broker service
```bash
./tubemq.sh broker start
```
- After the sub-state of the broker changes to `idle`, we can add topics to that broker.
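Similarly, the broker web port from the configuration table can be spot-checked as below; `YOUR_BROKER_IP` is a placeholder for the broker host, and this check does not replace verifying the broker state on the master web GUI.

```bash
curl -sSf -o /dev/null http://YOUR_BROKER_IP:8081 && echo "broker web port is reachable"
```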


## 2 Quick Start
### 3.1 Add Topic
## Quick Start
### Add Topic
- 3.1.1 We can add or manage the cluster topics on the web GUI. To add a new topic, go to the
topic list page and click the add new topic button

@@ -152,10 +149,10 @@ that the topic publish/subscribe state is active now.

- 3.1.5 Now we can use the topic to send messages.

### 2.2 Run Example
### Run Example
Now we can use the `demo` topic created before to test our cluster.

#### 2.2.1 Produce Messages
#### Produce Messages

Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the producer.

```bash
cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
# Assumed producer demo invocation; check the demo scripts under ./bin for the exact
# name and flags in your version.
./bin/tubemq-producer-test.sh --master-servers YOUR_MASTER_IP:port --topicName demo
```
From the log, we can see the message is sent out.
![Demo 1](img/tubemq-send-message.png)

#### 2.2.2 Consume Messages
#### Consume Messages

Please don't forget to replace `YOUR_MASTER_IP:port` with your server IP and port, then start the consumer.
```bash
cd /INSTALL_PATH/apache-inlong-tubemq-server-[TUBEMQ-VERSION]-bin
# Assumed consumer demo invocation; check the demo scripts under ./bin for the exact
# name and flags in your version.
./bin/tubemq-consumer-test.sh --master-servers YOUR_MASTER_IP:port --topicName demo --groupName test_consume
```
From the log, we can see the message received by the consumer.
![Demo 2](img/tubemq-consume-message.png)

## 3 Deploy Manager
## Deploy Manager
You can refer to [InLong TubeMQ Manager](modules/tubemq/tubemq-manager/quick_start.md)

## 4 The End
## The End
At this point, the compilation, deployment, system configuration, startup, production and consumption of TubeMQ have all been covered. If you need more in-depth information, please check the relevant content in "TubeMQ HTTP API" and apply the corresponding configuration settings.

---