+API reference in the Swagger UI can be found at: https://api.mainflux.io
+To start working with the Mainflux system, you need to create a user account.
++Identity, which can be an email address (this must be unique as it identifies the user), and secret (a password of at least 8 characters).
+
curl -sSiX POST http://localhost/users -H "Content-Type: application/json" [-H "Authorization: Bearer <user_token>"] -d @- << EOF
+{
+ "name": "[name]",
+ "tags": ["[tag1]", "[tag2]"],
+ "credentials": {
+ "identity": "<user_identity>",
+ "secret": "<user_secret>"
+ },
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[status]",
+ "role": "[role]"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/users -H "Content-Type: application/json" -d @- << EOF
+{
+ "name": "John Doe",
+ "credentials": {
+ "identity": "john.doe@email.com",
+ "secret": "12345678"
+ }
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:45:38 GMT
+Content-Type: application/json
+Content-Length: 223
+Connection: keep-alive
+Location: /users/4f22fa45-50ca-491b-a7c9-680a2608dc13
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "4f22fa45-50ca-491b-a7c9-680a2608dc13",
+ "name": "John Doe",
+ "credentials": { "identity": "john.doe@email.com" },
+ "created_at": "2023-06-14T13:45:38.808423Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can also pass a <user_token> in the Authorization header so that the owner of the new user is the user identified by that token. For example:
curl -sSiX POST http://localhost/users -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "John Doe",
+ "credentials": {
+ "identity": "jane.doe@email.com",
+ "secret": "12345678"
+  }
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:46:47 GMT
+Content-Type: application/json
+Content-Length: 252
+Connection: keep-alive
+Location: /users/1890c034-7ef9-4cde-83df-d78ea1d4d281
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+To log in to the Mainflux system, you need to create a user_token.
curl -sSiX POST http://localhost/users/tokens/issue -H "Content-Type: application/json" -d @- << EOF
+{
+ "identity": "<user_identity>",
+ "secret": "<user_secret>"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/users/tokens/issue -H "Content-Type: application/json" -d @- << EOF
+{
+ "identity": "john.doe@email.com",
+ "secret": "12345678"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:47:32 GMT
+Content-Type: application/json
+Content-Length: 709
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTEzNTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.AND1sm6mN2wgUxVkDhpipCoNa87KPMghGaS5-4dU0iZaqGIUhWScrEJwOahT9ts1TZSd1qEcANTIffJ_y2Pbsg",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY4NTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.z3OWCHhNHNuvkzBqEAoLKWS6vpFLkIYXhH9cZogSCXd109-BbKVlLvYKmja-hkhaj_XDJKySDN3voiazBr_WTA",
+ "access_type": "Bearer"
+}
+
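+If you have jq installed, a small shell sketch like the one below can capture the access token into a variable for reuse in later requests (the USER_TOKEN variable name is only illustrative; drop curl's -i flag so that only the JSON body is parsed):
+USER_TOKEN=$(curl -sS -X POST http://localhost/users/tokens/issue \
+  -H "Content-Type: application/json" \
+  -d '{"identity": "john.doe@email.com", "secret": "12345678"}' | jq -r .access_token)
+echo "$USER_TOKEN"
+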
+To issue another access_token after the current one expires, you need to use the refresh_token.
curl -sSiX POST http://localhost/users/tokens/refresh -H "Content-Type: application/json" -H "Authorization: Bearer <refresh_token>"
+
+For example:
+curl -sSiX POST http://localhost/users/tokens/refresh -H "Content-Type: application/json" -H "Authorization: Bearer <refresh_token>"
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:49:45 GMT
+Content-Type: application/json
+Content-Length: 709
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTE0ODUsImlhdCI6MTY4Njc1MDU4NSwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.zZcUH12x7Tlnecrc3AAFnu3xbW4wAOGifWZMnba2EnhosHWDuSN4N7s2S7OxPOrBGAG_daKvkA65mi5n1sxi9A",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY5ODUsImlhdCI6MTY4Njc1MDU4NSwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.AjxJ5xlUUSjW99ECUAU19ONeCs8WlRl52Ost2qGTADxHGYBjPMqctruyoTYJbdORtL5f2RTxZsnLX_1vLKRY2A",
+ "access_type": "Bearer"
+}
+
+You can always check the profile of the currently logged-in user by using the user_token.
curl -sSiX GET http://localhost/users/profile -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/users/profile -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:51:59 GMT
+Content-Type: application/json
+Content-Length: 312
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": {
+ "identity": "jane.doe@email.com"
+ },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can always fetch a user entity by providing the user ID and a user_token.
curl -sSiX GET http://localhost/users/<user_id> -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281 -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:51:59 GMT
+Content-Type: application/json
+Content-Length: 312
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": {
+ "identity": "jane.doe@email.com"
+ },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can get all users in the database by querying the /users endpoint.
curl -sSiX GET http://localhost/users -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/users -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:52:36 GMT
+Content-Type: application/json
+Content-Length: 285
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "total": 1,
+ "users": [
+ {
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+If you want to paginate your results then use offset, limit, metadata, name, identity, tag, status and visibility as query parameters.
curl -sSiX GET "http://localhost/users?[offset=<offset>]&[limit=<limit>]&[identity=<identity>]&[name=<name>]&[tag=<tag>]&[status=<status>]&[visibility=<visibility>]" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET "http://localhost/users?offset=0&limit=5&identity=jane.doe@email.com" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:53:16 GMT
+Content-Type: application/json
+Content-Length: 284
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 5,
+ "total": 1,
+ "users": [
+ {
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
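+As a rough sketch of paging through all users (assuming jq is installed and $USER_TOKEN holds a valid token), you can keep increasing offset until a page returns fewer entries than limit:
+LIMIT=5
+OFFSET=0
+while :; do
+  PAGE=$(curl -sS "http://localhost/users?offset=$OFFSET&limit=$LIMIT" -H "Authorization: Bearer $USER_TOKEN")
+  # print one identity per line for this page
+  echo "$PAGE" | jq -r '.users[]? | .credentials.identity'
+  COUNT=$(echo "$PAGE" | jq '.users | length')
+  [ "$COUNT" -lt "$LIMIT" ] && break
+  OFFSET=$((OFFSET + LIMIT))
+done
+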
+Updating user's name and/or metadata
+curl -sSiX PATCH http://localhost/users/<user_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "[new_name]",
+ "metadata": {
+ "[key]": "[value]",
+ }
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281 -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "Jane Doe",
+ "metadata": {
+ "location": "london",
+ }
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:54:40 GMT
+Content-Type: application/json
+Content-Length: 354
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:54:40.208005Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating user's tags
+curl -sSiX PATCH http://localhost/users/<user_id>/tags -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "tags": [
+ "[tag_1]",
+ ...
+ "[tag_N]"
+ ]
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/tags -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "tags": ["male", "developer"]
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:55:18 GMT
+Content-Type: application/json
+Content-Length: 375
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:55:18.353027Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating user's owner
+curl -sSiX PATCH http://localhost/users/<user_id>/owner -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "owner": "<owner_id>"
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/owner -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "owner": "532311a4-c13b-4061-b991-98dcae7a934e"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:56:32 GMT
+Content-Type: application/json
+Content-Length: 375
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "owner": "532311a4-c13b-4061-b991-98dcae7a934e",
+ "credentials": { "identity": "jane.doe@email.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:56:32.059484Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating user's identity
+curl -sSiX PATCH http://localhost/users/<user_id>/identity -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "identity": "<user_identity>"
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/identity -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "identity": "updated.jane.doe@gmail.com"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:59:53 GMT
+Content-Type: application/json
+Content-Length: 382
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "updated.jane.doe@gmail.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:59:53.422595Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Changing the user secret can be done by calling the update secret method
+curl -sSiX PATCH http://localhost/users/secret -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "old_secret": "<old_secret>",
+ "new_secret": "<new_secret>"
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/users/secret -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "old_secret": "12345678",
+ "new_secret": "12345678a"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 14:00:35 GMT
+Content-Type: application/json
+Content-Length: 281
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Changing the user status to enabled can be done by calling the enable user method
+curl -sSiX POST http://localhost/users/<user_id>/enable -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/enable -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 14:01:25 GMT
+Content-Type: application/json
+Content-Length: 382
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "updated.jane.doe@gmail.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:59:53.422595Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Changing the user status to disabled can be done by calling the disable user method
+curl -sSiX POST http://localhost/users/<user_id>/disable -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/disable -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 14:01:23 GMT
+Content-Type: application/json
+Content-Length: 383
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "identity": "updated.jane.doe@gmail.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:59:53.422595Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "disabled"
+}
+
+You can get all groups a user is assigned to by calling the get user memberships method.
+If you want to paginate your results then use offset, limit, metadata, name, status, parentID, ownerID, tree and dir as query parameters.
++The user identified by the user_token must be assigned to the same group as the user with id user_id with the c_list action. Alternatively, the user identified by the user_token must be the owner of the user with id user_id.
curl -sSiX GET http://localhost/users/<user_id>/memberships -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/memberships -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:22:18 GMT
+Content-Type: application/json
+Content-Length: 367
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 0,
+ "offset": 0,
+ "memberships": [
+ {
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Data analysts",
+ "description": "This group would be responsible for analyzing data collected from sensors.",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "2023-06-15T10:17:56.475241Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+ }
+ ]
+}
+
+To create a thing, you need the thing data and a user_token.
curl -sSiX POST http://localhost/things -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "id": "[thing_id]",
+ "name":"[thing_name]",
+ "tags": ["[tag1]", "[tag2]"],
+ "credentials": {
+ "identity": "[thing-identity]",
+ "secret":"[thing-secret]"
+ },
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/things -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"Temperature Sensor"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:04:04 GMT
+Content-Type: application/json
+Content-Length: 280
+Connection: keep-alive
+Location: /things/48101ecd-1535-40c6-9ed8-5b1d21e371bb
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602664Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
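+The generated thing ID and secret are needed for the connection and messaging steps later on, so a minimal sketch like this one (assuming jq and a valid $USER_TOKEN) captures them from the creation response:
+RESP=$(curl -sS -X POST http://localhost/things -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $USER_TOKEN" -d '{"name": "Temperature Sensor"}')
+THING_ID=$(echo "$RESP" | jq -r .id)
+THING_SECRET=$(echo "$RESP" | jq -r .credentials.secret)
+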
+It is often the case that the user will want to integrate existing solutions, e.g. an asset management system, with the Mainflux platform. To simplify the integration between the systems and avoid artificial cross-platform references, such as special fields in Mainflux Things metadata, it is possible to set the Mainflux Thing ID to an existing unique ID while creating the Thing. This way, the user can set the existing ID as the Thing ID of a newly created Thing to keep the reference between the Thing and the asset that the Thing represents.
+The limitation is that the existing ID has to be unique in the Mainflux domain.
+To create a thing with an external ID, you need to provide the ID together with the thing name and other fields, as well as a user_token.
For example:
+curl -sSiX POST http://localhost/things -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name":"Temperature Sensor"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:05:06 GMT
+Content-Type: application/json
+Content-Length: 280
+Connection: keep-alive
+Location: /things/2766ae94-9a08-4418-82ce-3b91cf2ccd3e
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "65ca03bd-eb6b-420b-9d5d-46d459d4f71c" },
+ "created_at": "2023-06-15T09:05:06.538170496Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+It is often the case that the user will want to integrate existing solutions, e.g. an asset management system, with the Mainflux platform. To simplify the integration between the systems and avoid artificial cross-platform references, such as special fields in Mainflux Things metadata, it is possible to set the Mainflux Thing secret to an existing unique secret when creating the Thing. This way, the user can set the existing secret as the Thing secret of a newly created Thing to keep the reference between the Thing and the asset that the Thing represents.
+The limitation is that the existing secret has to be unique in the Mainflux domain.
+To create a thing with an external secret, you need to provide the secret together with the thing name and other fields, as well as a user_token.
For example:
+curl -sSiX POST http://localhost/things -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"Temperature Sensor"
+ "credentials": {
+ "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e"
+ }
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:05:06 GMT
+Content-Type: application/json
+Content-Length: 280
+Connection: keep-alive
+Location: /things/2766ae94-9a08-4418-82ce-3b91cf2ccd3e
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e" },
+ "created_at": "2023-06-15T09:05:06.538170496Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can create multiple things at once by entering a series of thing structures and a user_token.
curl -sSiX POST http://localhost/things/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "id": "[thing_id]",
+ "name":"[thing_name]",
+ "tags": ["[tag1]", "[tag2]"],
+ "credentials": {
+ "identity": "[thing-identity]",
+ "secret":"[thing-secret]"
+ },
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+ },
+ {
+ "id": "[thing_id]",
+ "name":"[thing_name]",
+ "tags": ["[tag1]", "[tag2]"],
+ "credentials": {
+ "identity": "[thing-identity]",
+ "secret":"[thing-secret]"
+ },
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+ }
+]
+EOF
+
+For example:
+curl -sSiX POST http://localhost/things/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "name":"Motion Sensor"
+ },
+ {
+ "name":"Light Sensor"
+ }
+]
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:05:45 GMT
+Content-Type: application/json
+Content-Length: 583
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "total": 2,
+ "things": [
+ {
+ "id": "19f59b2d-1e9c-43db-bc84-5432bd52a83f",
+ "name": "Motion Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "941c380a-3a41-40e9-8b79-3087daa4f3a6" },
+ "created_at": "2023-06-15T09:05:45.719182307Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "3709f2b0-9c73-413f-992e-7f6f9b396b0d",
+ "name": "Light Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "798ee6be-311b-4640-99e4-0ccb19e0dcb9" },
+ "created_at": "2023-06-15T09:05:45.719186184Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+As when creating a Thing with an external ID, the user can create multiple things at once by providing unique IDs in UUID v4 format in a series of things, together with a user_token.
For example:
+curl -sSiX POST http://localhost/things/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111111",
+ "name":"Motion Sensor"
+ },
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111112",
+ "name":"Light Sensor"
+ }
+]
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:06:17 GMT
+Content-Type: application/json
+Content-Length: 583
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "total": 2,
+ "things": [
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111111",
+ "name": "Motion Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "325cda17-3a52-465d-89a7-2b63c7d0e3a6" },
+ "created_at": "2023-06-15T09:06:17.967825372Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111112",
+ "name": "Light Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b" },
+ "created_at": "2023-06-15T09:06:17.967828689Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+You can get a thing entity by entering the thing ID and a user_token.
curl -sSiX GET http://localhost/things/<thing_id> -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:07:30 GMT
+Content-Type: application/json
+Content-Length: 277
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can get all things in the database by querying the /things endpoint.
curl -sSiX GET http://localhost/things -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/things -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:07:59 GMT
+Content-Type: application/json
+Transfer-Encoding: chunked
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "total": 8,
+ "things": [
+ {
+ "id": "f3047c10-f2c7-4d53-b3c0-bc56c560c546",
+ "name": "Humidity Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "6d11a91f-0bd8-41aa-8e1b-4c6338329c9c" },
+ "created_at": "2023-06-14T12:04:12.740098Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "04b0b2d1-fdaf-4b66-96a0-740a3151db4c",
+ "name": "UV Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "a1e5d77f-8903-4cef-87b1-d793a3c28de3" },
+ "created_at": "2023-06-14T12:04:56.245743Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Temperature Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "65ca03bd-eb6b-420b-9d5d-46d459d4f71c" },
+ "created_at": "2023-06-15T09:05:06.53817Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "19f59b2d-1e9c-43db-bc84-5432bd52a83f",
+ "name": "Motion Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "941c380a-3a41-40e9-8b79-3087daa4f3a6" },
+ "created_at": "2023-06-15T09:05:45.719182Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "3709f2b0-9c73-413f-992e-7f6f9b396b0d",
+ "name": "Light Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "798ee6be-311b-4640-99e4-0ccb19e0dcb9" },
+ "created_at": "2023-06-15T09:05:45.719186Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111111",
+ "name": "Motion Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "325cda17-3a52-465d-89a7-2b63c7d0e3a6" },
+ "created_at": "2023-06-15T09:06:17.967825Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111112",
+ "name": "Light Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b" },
+ "created_at": "2023-06-15T09:06:17.967828Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+If you want to paginate your results then use offset, limit, metadata, name, status, tags and visibility as query parameters.
curl -sSiX GET "http://localhost/things?[offset=<offset>]&[limit=<limit>]&[name=<name>]&[status=<status>]" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET "http://localhost/things?offset=1&limit=5&name=Light%20Sensor" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:08:39 GMT
+Content-Type: application/json
+Content-Length: 321
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 5,
+ "offset": 1,
+ "total": 2,
+ "things": [
+ {
+ "id": "eb2670ba-a2be-4ea4-83cb-111111111112",
+ "name": "Light Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b" },
+ "created_at": "2023-06-15T09:06:17.967828Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+Updating a thing's name and/or metadata
+curl -sSiX PATCH http://localhost/things/<thing_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"[thing_name]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ }
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"Pressure Sensor"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:09:12 GMT
+Content-Type: application/json
+Content-Length: 332
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:09:12.267074Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating a thing's tags
+curl -sSiX PATCH http://localhost/things/<thing_id>/tags -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "tags": ["tag_1", ..., "tag_N"]
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/tags -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "tags": ["sensor", "smart"]
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:09:44 GMT
+Content-Type: application/json
+Content-Length: 347
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "tags": ["sensor", "smart"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:09:44.766726Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating a thing's owner
+curl -sSiX PATCH http://localhost/things/<thing_id>/owner -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "owner": "[owner_id]"
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/owner -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "owner": "f7c55a1f-dde8-4880-9796-b3a0cd05745b"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:09:44 GMT
+Content-Type: application/json
+Content-Length: 347
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "tags": ["sensor", "smart"],
+ "owner": "f7c55a1f-dde8-4880-9796-b3a0cd05745b",
+ "credentials": { "secret": "c3f8c096-c60f-4375-8494-bca20a12fca7" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:09:44.766726Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Updating a thing's secret
+curl -sSiX PATCH http://localhost/things/<thing_id>/secret -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "secret": "<thing_secret>"
+}
+EOF
+
+For example:
+curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/secret -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:10:52 GMT
+Content-Type: application/json
+Content-Length: 321
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "tags": ["sensor", "smart"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:10:52.051497Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+To enable a thing you need a thing_id and a user_token.
curl -sSiX POST http://localhost/things/<thing_id>/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:11:43 GMT
+Content-Type: application/json
+Content-Length: 321
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "tags": ["sensor", "smart"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:10:52.051497Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+To disable a thing you need a thing_id and a user_token.
curl -sSiX POST http://localhost/things/<thing_id>/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:11:38 GMT
+Content-Type: application/json
+Content-Length: 322
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Pressure Sensor",
+ "tags": ["sensor", "smart"],
+ "owner": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "credentials": { "secret": "94939159-9a08-4f17-9e4e-3b91cf2ccd3e" },
+ "created_at": "2023-06-15T09:04:04.292602Z",
+ "updated_at": "2023-06-15T09:10:52.051497Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "disabled"
+}
+
+To create a channel, you need a user_token
curl -sSiX POST http://localhost/channels -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "id": "[channel_id]",
+ "name":"[channel_name]",
+ "description":"[channel_description]",
+ "owner_id": "[owner_id]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/channels -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "Temperature Data"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:12:51 GMT
+Content-Type: application/json
+Content-Length: 218
+Connection: keep-alive
+Location: /channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Temperature Data",
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
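+As with things, a short sketch (again assuming jq and a valid $USER_TOKEN) can capture the new channel's ID for the connection and messaging steps below:
+CHANNEL_ID=$(curl -sS -X POST http://localhost/channels -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $USER_TOKEN" -d '{"name": "Temperature Data"}' | jq -r .id)
+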
+A channel is a group of things that could represent a special category in existing systems, e.g. a building-level channel could represent a level of a smart building system. To help keep the reference, it is possible to set an existing ID while creating the Mainflux channel. There are two limitations - the existing ID has to be in UUID v4 format and it has to be unique in the Mainflux domain.
+To create a channel with an external ID, the user needs to provide a unique ID in UUID v4 format, and a user_token.
For example:
+curl -sSiX POST http://localhost/channels -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "name": "Humidity Data"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:15:11 GMT
+Content-Type: application/json
+Content-Length: 219
+Connection: keep-alive
+Location: /channels/48101ecd-1535-40c6-9ed8-5b1d21e371bb
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Humidity Data",
+ "created_at": "2023-06-15T09:15:11.477695Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+As with creating a channel with an external ID, the user can create multiple channels at once by providing a series of channels, optionally with unique IDs in UUID v4 format, together with a user_token.
curl -sSiX POST http://localhost/channels/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "id": "[channel_id]",
+ "name":"[channel_name]",
+ "description":"[channel_description]",
+ "owner_id": "[owner_id]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+ },
+ {
+ "id": "[channel_id]",
+ "name":"[channel_name]",
+ "description":"[channel_description]",
+ "owner_id": "[owner_id]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+ }
+]
+EOF
+
+For example:
+curl -sSiX POST http://localhost/channels/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "name":"Light Data"
+ },
+ {
+ "name":"Pressure Data"
+ }
+]
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:15:44 GMT
+Content-Type: application/json
+Content-Length: 450
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "channels": [
+ {
+ "id": "cb81bbff-850d-471f-bd74-c15d6e1a6c4e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Light Data",
+ "created_at": "2023-06-15T09:15:44.154283Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "fc9bf029-b1d3-4408-8d53-fc576247a4b3",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Pressure Data",
+ "created_at": "2023-06-15T09:15:44.15721Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+As with things, you can create multiple channels with external IDs at once.
+For example:
+curl -sSiX POST http://localhost/channels/bulk -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+[
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111111",
+ "name":"Light Data"
+ },
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111112",
+ "name":"Pressure Data"
+ }
+]
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:16:16 GMT
+Content-Type: application/json
+Content-Length: 453
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "channels": [
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111111",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Light Data",
+ "created_at": "2023-06-15T09:16:16.931016Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111112",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Pressure Data",
+ "created_at": "2023-06-15T09:16:16.934486Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+Get a channel entity for a logged-in user
+curl -sSiX GET http://localhost/channels/<channel_id> -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:17:17 GMT
+Content-Type: application/json
+Content-Length: 218
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Temperature Data",
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can get all channels for a logged-in user.
+If you want to paginate your results then use offset, limit, metadata, name, status, parentID, ownerID, tree and dir as query parameters.
curl -sSiX GET http://localhost/channels -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/channels -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:17:46 GMT
+Content-Type: application/json
+Content-Length: 1754
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "total": 8,
+ "channels": [
+ {
+ "id": "17129934-4f48-4163-bffe-0b7b532edc5c",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Tokyo",
+ "created_at": "2023-06-14T12:10:07.950311Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Humidity Data",
+ "created_at": "2023-06-15T09:15:11.477695Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111111",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Light Data",
+ "created_at": "2023-06-15T09:16:16.931016Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "977bbd33-5b59-4b7a-a9c3-111111111112",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Pressure Data",
+ "created_at": "2023-06-15T09:16:16.934486Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Temperature Data",
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "b3867a52-675d-4f05-8cd0-df5a08a63ff3",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "London",
+ "created_at": "2023-06-14T12:09:34.205894Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "cb81bbff-850d-471f-bd74-c15d6e1a6c4e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Light Data",
+ "created_at": "2023-06-15T09:15:44.154283Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "fc9bf029-b1d3-4408-8d53-fc576247a4b3",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Pressure Data",
+ "created_at": "2023-06-15T09:15:44.15721Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+Update channel name and/or metadata.
+curl -sSiX PUT http://localhost/channels/<channel_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"[channel_name]",
+ "description":"[channel_description]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ }
+}
+EOF
+
+For example:
+curl -sSiX PUT http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"Jane Doe",
+ "metadata": {
+ "location": "london"
+ }
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:18:26 GMT
+Content-Type: application/json
+Content-Length: 296
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Jane Doe",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "2023-06-15T09:18:26.886913Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+To enable a channel you need a channel_id and a user_token.
curl -sSiX POST http://localhost/channels/<channel_id>/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:19:29 GMT
+Content-Type: application/json
+Content-Length: 296
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Jane Doe",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "2023-06-15T09:18:26.886913Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+To disable a channel you need a channel_id and a user_token.
curl -sSiX POST http://localhost/channels/<channel_id>/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:19:24 GMT
+Content-Type: application/json
+Content-Length: 297
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Jane Doe",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:12:51.162431Z",
+ "updated_at": "2023-06-15T09:18:26.886913Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "disabled"
+}
+
+Connect things to channels
++actions is optional; if not provided, the default actions are m_read and m_write.
curl -sSiX POST http://localhost/connect -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subjects": ["<thing_id>"],
+ "objects": ["<channel_id>"],
+ "actions": ["[action]"]
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/connect -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subjects": ["48101ecd-1535-40c6-9ed8-5b1d21e371bb"],
+ "objects": ["aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8"]
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:21:37 GMT
+Content-Type: application/json
+Content-Length: 247
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "policies": [
+ {
+ "owner_id": "",
+ "subject": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "object": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "actions": ["m_write", "m_read"],
+ "created_at": "0001-01-01T00:00:00Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "updated_by": ""
+ }
+ ]
+}
+
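+Putting the earlier steps together, a minimal end-to-end sketch (assuming $USER_TOKEN, $THING_ID, $THING_SECRET and $CHANNEL_ID were captured as shown above) connects the thing to the channel and then publishes a SenML message over HTTP with the thing secret, as described in the messaging section further below:
+curl -sS -X POST http://localhost/connect -H "Content-Type: application/json" \
+  -H "Authorization: Bearer $USER_TOKEN" \
+  -d "{\"subjects\": [\"$THING_ID\"], \"objects\": [\"$CHANNEL_ID\"]}"
+curl -sS -X POST http://localhost/http/channels/$CHANNEL_ID/messages \
+  -H "Content-Type: application/senml+json" \
+  -H "Authorization: Thing $THING_SECRET" \
+  -d '[{"n": "voltage", "u": "V", "v": 120.1}]'
+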
+Connect thing to channel
++actions is optional; if not provided, the default actions are m_read and m_write.
curl -sSiX POST http://localhost/things/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<thing_id>",
+ "object": "<channel_id>",
+ "actions": ["<action>", "[action]"]]
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/things/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "object": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:23:28 GMT
+Content-Type: application/json
+Content-Length: 290
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "policies": [
+ {
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "subject": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "object": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "actions": ["m_write", "m_read"],
+ "created_at": "2023-06-15T09:23:28.769729Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "updated_by": ""
+ }
+ ]
+}
+
+Disconnect things from channels specified by lists of IDs.
+curl -sSiX POST http://localhost/disconnect -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subjects": ["<thing_id_1>", "[thing_id_2]"],
+ "objects": ["<channel_id_1>", "[channel_id_2]"]
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/disconnect -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subjects": ["48101ecd-1535-40c6-9ed8-5b1d21e371bb"],
+ "objects": ["aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8"]
+}
+EOF
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:23:07 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Disconnect a thing from a channel
+curl -sSiX DELETE http://localhost/things/policies/<subject_id>/<object_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX DELETE http://localhost/things/policies/48101ecd-1535-40c6-9ed8-5b1d21e371bb/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:25:23 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Checks if a thing has access to a channel
+curl -sSiX POST http://localhost/channels/<channel_id>/access -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<thing_secret>",
+ "action": "m_read" | "m_write",
+ "entity_type": "thing"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/access -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "48101ecd-1535-40c6-9ed8-5b1d21e371bb",
+ "action": "m_read",
+ "entity_type": "thing"
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:39:26 GMT
+Content-Type: application/json
+Content-Length: 0
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Validates a thing's key and returns its ID if the key is valid
+curl -sSiX POST http://localhost/identify -H "Content-Type: application/json" -H "Authorization: Thing <thing_secret>"
+
+For example:
+curl -sSiX POST http://localhost/identify -H "Content-Type: application/json" -H "Authorization: Thing 6d11a91f-0bd8-41aa-8e1b-4c6338329c9c"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:28:16 GMT
+Content-Type: application/json
+Content-Length: 46
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{ "id": "f3047c10-f2c7-4d53-b3c0-bc56c560c546" }
+
+Sends a message via the HTTP protocol
+curl -sSiX POST http://localhost/http/channels/<channel_id>/messages -H "Content-Type: application/senml+json" -H "Authorization: Thing <thing_secret>" -d @- << EOF
+[
+ {
+ "bn": "<base_name>",
+ "bt": "[base_time]",
+ "bu": "[base_unit]",
+ "bver": [base_version],
+ "n": "<measurement_name>",
+ "u": "<measurement_unit>",
+ "v": <measurement_value>,
+ },
+ {
+ "n": "[measurement_name]",
+ "t": <measurement_time>,
+ "v": <measurement_value>,
+ }
+]
+EOF
+
+For example:
+curl -sSiX POST http://localhost/http/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/messages -H "Content-Type: application/senml+json" -H "Authorization: Thing a83b9afb-9022-4f9e-ba3d-4354a08c273a" -d @- << EOF
+[
+ {
+ "bn": "some-base-name:",
+ "bt": 1.276020076001e+09,
+ "bu": "A",
+ "bver": 5,
+ "n": "voltage",
+ "u": "V",
+ "v": 120.1
+ },
+ {
+ "n": "current",
+ "t": -5,
+ "v": 1.2
+ },
+ {
+ "n": "current",
+ "t": -4,
+ "v": 1.3
+ }
+]
+EOF
+HTTP/1.1 202 Accepted
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:40:44 GMT
+Content-Length: 0
+Connection: keep-alive
+
+Reads messages from the database for a given channel
+curl -sSiX GET "http://localhost:<service_port>/channels/<channel_id>/messages?[offset=<offset>]&[limit=<limit>]" -H "Authorization: Thing <thing_secret>"
+
+For example:
+curl -sSiX GET http://localhost:9009/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/messages -H "Authorization: Thing a83b9afb-9022-4f9e-ba3d-4354a08c273a"
+
+HTTP/1.1 200 OK
+Content-Type: application/json
+Date: Wed, 05 Apr 2023 16:01:49 GMT
+Content-Length: 660
+
+{
+ "offset": 0,
+ "limit": 10,
+ "format": "messages",
+ "total": 3,
+ "messages": [{
+ "channel": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "publisher": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "protocol": "http",
+ "name": "some-base-name:voltage",
+ "unit": "V",
+ "time": 1276020076.001,
+ "value": 120.1
+ },
+ {
+ "channel": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "publisher": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "protocol": "http",
+ "name": "some-base-name:current",
+ "unit": "A",
+ "time": 1276020072.001,
+ "value": 1.3
+ },
+ {
+ "channel": "aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8",
+ "publisher": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "protocol": "http",
+ "name": "some-base-name:current",
+ "unit": "A",
+ "time": 1276020071.001,
+ "value": 1.2
+ }
+ ]
+}
+
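+Note that the absolute timestamps in the stored messages are resolved from the SenML base time: each record's time is bt + t, so with bt = 1276020076.001 the record with t: -5 is stored at 1276020071.001 and the one with t: -4 at 1276020072.001, which matches the response above.
+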
+To create a group, you need the group name and a user_token
curl -sSiX POST http://localhost/groups -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"<group_name>",
+ "description":"[group_description]",
+ "parent_id": "[parent_id]",
+ "owner_id": "[owner_id]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ },
+ "status": "[enabled|disabled]"
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/groups -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform."
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:41:42 GMT
+Content-Type: application/json
+Content-Length: 252
+Connection: keep-alive
+Location: /groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform.",
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+When you use parent_id, make sure the parent is an already existing group.
For example:
+curl -sSiX POST http://localhost/groups -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name": "Customer Support",
+ "description": "This group would be responsible for providing support to users of the platform.",
+ "parent_id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e"
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 09:42:34 GMT
+Content-Type: application/json
+Content-Length: 306
+Connection: keep-alive
+Location: /groups/dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "parent_id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Customer Support",
+ "description": "This group would be responsible for providing support to users of the platform.",
+ "created_at": "2023-06-15T09:42:34.063997Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+Get a group entity for a logged-in user
+curl -sSiX GET http://localhost/groups/<group_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:00:52 GMT
+Content-Type: application/json
+Content-Length: 252
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform.",
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+}
+
+You can get all groups for a logged-in user.
+If you want to paginate your results then use offset, limit, metadata, name, status, parentID, ownerID, tree and dir as query parameters.
curl -sSiX GET http://localhost/groups -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/groups -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:13:50 GMT
+Content-Type: application/json
+Content-Length: 807
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 0,
+ "offset": 0,
+ "total": 3,
+ "groups": [
+ {
+ "id": "0a4a2c33-2d0e-43df-b51c-d905aba99e17",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Sensor Operators",
+ "created_at": "2023-06-14T13:33:52.249784Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform.",
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "parent_id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Customer Support",
+ "description": "This group would be responsible for providing support to users of the platform.",
+ "created_at": "2023-06-15T09:42:34.063997Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
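+If you only need a subset of these results, the same endpoint accepts the pagination parameters listed above. For example, to fetch the first five groups as a tree (a sketch; adjust the values to your needs):
+curl -sSiX GET "http://localhost/groups?offset=0&limit=5&tree=true" -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+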
+You can get all groups that are parents of a group for a logged-in user.
+If you want to paginate your results then use offset, limit, metadata, name, status, parentID, ownerID, tree and dir as query parameters.
curl -sSiX GET http://localhost/groups/<group_id>/parents -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/groups/dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a/parents?tree=true -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:16:03 GMT
+Content-Type: application/json
+Content-Length: 627
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "offset": 0,
+ "total": 3,
+ "groups": [
+ {
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform.",
+ "level": -1,
+ "children": [
+ {
+ "id": "dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "parent_id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Customer Support",
+ "description": "This group would be responsible for providing support to users of the platform.",
+ "created_at": "2023-06-15T09:42:34.063997Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ],
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+You can get all groups that are children of a group for a logged-in user.
+If you want to paginate your results then use offset, limit, metadata, name, status, parentID, ownerID, tree and dir as query parameters.
curl -sSiX GET http://localhost/groups/<group_id>/children -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/children?tree=true -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:17:13 GMT
+Content-Type: application/json
+Content-Length: 755
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "offset": 0,
+ "total": 3,
+ "groups": [
+ {
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Security Engineers",
+ "description": "This group would be responsible for securing the platform.",
+ "path": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "children": [
+ {
+ "id": "dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "parent_id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "name": "Customer Support",
+ "description": "This group would be responsible for providing support to users of the platform.",
+ "level": 1,
+ "path": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e.dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a",
+ "created_at": "2023-06-15T09:42:34.063997Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ],
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+Update group entity
+curl -sSiX PUT http://localhost/groups/<group_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"[group_name]",
+ "description":"[group_description]",
+ "metadata": {
+ "[key1]": "[value1]",
+ "[key2]": "[value2]"
+ }
+}
+EOF
+
+For example:
+curl -sSiX PUT http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "name":"Data Analysts",
+ "description":"This group would be responsible for analyzing data collected from sensors.",
+ "metadata": {
+ "location": "london"
+ }
+}
+EOF
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:17:56 GMT
+Content-Type: application/json
+Content-Length: 328
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Data Analysts",
+ "description": "This group would be responsible for analyzing data collected from sensors.",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "2023-06-15T10:17:56.475241Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Disable a group entity
+curl -sSiX POST http://localhost/groups/<group_id>/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/disable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:18:28 GMT
+Content-Type: application/json
+Content-Length: 329
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Data Analysts",
+ "description": "This group would be responsible for analyzing data collected from sensors.",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "2023-06-15T10:17:56.475241Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "disabled"
+}
+
+Enable a group entity
+curl -sSiX POST http://localhost/groups/<group_id>/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX POST http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/enable -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:18:55 GMT
+Content-Type: application/json
+Content-Length: 328
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "id": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "name": "Data Analysts",
+ "description": "This group would be responsible for analyzing data collected from sensors.",
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-15T09:41:42.860481Z",
+ "updated_at": "2023-06-15T10:17:56.475241Z",
+ "updated_by": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "status": "enabled"
+}
+
+Assign user to a group
+curl -sSiX POST http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<user_id>",
+ "object": "<group_id>",
+ "actions": ["<member_action>"]
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "object": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "actions": ["g_list", "c_list"]
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 10:19:59 GMT
+Content-Type: application/json
+Content-Length: 0
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+You can get all users assigned to a group.
+If you want to paginate your results then use offset, limit, metadata, name, status, identity and tag as query parameters.
++Take into consideration that the user identified by the user_token needs to be assigned to the same group identified by group_id with the g_list action, or be the owner of the group identified by group_id.
curl -sSiX GET http://localhost/groups/<group_id>/members -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/members -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:21:29 GMT
+Content-Type: application/json
+Content-Length: 318
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "total": 1,
+ "members": [
+ {
+ "id": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "name": "Jane Doe",
+ "tags": ["male", "developer"],
+ "credentials": { "identity": "updated.jane.doe@gmail.com" },
+ "metadata": { "location": "london" },
+ "created_at": "2023-06-14T13:46:47.322648Z",
+ "updated_at": "2023-06-14T13:59:53.422595Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+Unassign user from group
+curl -sSiX DELETE http://localhost/users/policies/<subject_id>/<object_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX DELETE http://localhost/users/policies/1890c034-7ef9-4cde-83df-d78ea1d4d281/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:25:27 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Only actions defined in the Predefined Policies section are allowed.
+curl -sSiX POST http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<user_id>",
+ "object": "<group_id>",
+ "actions": ["<actions>", "[actions]"]
+}
+EOF
+
+curl -sSiX POST http://localhost/things/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<thing_id>",
+ "object": "<channel_id>",
+ "actions": ["<actions>", "[actions]"]
+}
+EOF
+
+curl -sSiX POST http://localhost/things/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<user_id>",
+ "object": "<channel_id>",
+ "actions": ["<actions>", "[actions]"]
+ "external": true
+}
+EOF
+
+For example:
+curl -sSiX POST http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "object": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "actions": ["g_add", "c_list"]
+}
+EOF
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:26:50 GMT
+Content-Type: application/json
+Content-Length: 0
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Only actions defined in the Predefined Policies section are allowed.
+curl -sSiX PUT http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<user_id>",
+ "object": "<group_id>",
+ "actions": ["<actions>", "[actions]"]
+}
+EOF
+
+curl -sSiX PUT http://localhost/things/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "<thing_id> | <user_id>",
+ "object": "<channel_id>",
+ "actions": ["<actions>", "[actions]"]
+}
+EOF
+
+For example:
+curl -sSiX PUT http://localhost/users/policies -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d @- << EOF
+{
+ "subject": "1890c034-7ef9-4cde-83df-d78ea1d4d281",
+ "object": "2766ae94-9a08-4418-82ce-3b91cf2ccd3e",
+ "actions": ["g_list", "c_list"]
+}
+EOF
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:27:19 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+Only policies defined in the Predefined Policies section are allowed.
+curl -sSiX DELETE http://localhost/users/policies/<user_id>/<channel_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+curl -sSiX DELETE http://localhost/things/policies/<thing_id>/<channel_id> -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -sSiX DELETE http://localhost/users/policies/1890c034-7ef9-4cde-83df-d78ea1d4d281/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Thu, 15 Jun 2023 11:28:31 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+
+
+
+
+
+
+The Mainflux IoT platform comprises the following services:
+| Service | Description |
+| --- | --- |
+| users | Manages platform's users and auth concerns in regards to users and groups |
+| things | Manages platform's things, channels and auth concerns in regards to things and channels |
+| http-adapter | Provides an HTTP interface for sending messages via HTTP |
+| mqtt-adapter | Provides an MQTT and MQTT over WS interface for sending and receiving messages via MQTT |
+| ws-adapter | Provides a WebSocket interface for sending and receiving messages via WS |
+| coap-adapter | Provides a CoAP interface for sending and receiving messages via CoAP |
+| opcua-adapter | Provides an OPC-UA interface for sending and receiving messages via OPC-UA |
+| lora-adapter | Provides a LoRa Server forwarder for sending and receiving messages via LoRa |
+| mainflux-cli | Command line interface |
The platform is built around 2 main entities: users and things.
+User represents the real (human) user of the system. Users are represented via their email address used as their identity, and password used as their secret, which they use as platform access credentials in order to obtain an access token. Once logged into the system, a user can manage their resources (i.e. groups, things and channels) in CRUD fashion and define access control policies by connecting them.
+Group represents a logical grouping of users. It is used to simplify access control management by allowing users to be grouped together. When assigning a user to a group, we create a policy that defines what that user can do with the group and with other users in the group. This way, a user can be assigned to multiple groups, and each group can have multiple users assigned to it. Users in one group have access to other users in the same group as long as they have the required policy. A group can also be assigned to another group, thus creating a group hierarchy.
+Thing represents a device (or an application) connected to Mainflux that uses the platform for message exchange with other "things".
+Channel represents a communication channel. It serves as a message topic that can be consumed by all of the things connected to it. It also serves as a grouping mechanism for things. A thing can be connected to multiple channels, and a channel can have multiple things connected to it. A user can be connected to a channel as well, which gives them access to the messages published to that channel and to the things connected to that channel, subject to the required policy. A channel can also be assigned to another channel, thus creating a channel hierarchy. Both things and users can be assigned to a channel. When assigning a thing to a channel, we create a policy that defines what that thing can do to the channel, for example reading or writing messages to it. When assigning a user to a channel, we create a policy that defines what that user can do with the channel and things connected to it, thereby enabling the sharing of things between users.
+Mainflux uses NATS as its default messaging backbone, due to its lightweight and performant nature. You can treat its subjects as a physical representation of Mainflux channels, where the subject name is constructed using the channel's unique identifier. Mainflux also provides the ability to change your default message broker to RabbitMQ, VerneMQ or Kafka.
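+For example, assuming the default subject layout where each channel maps to a broker subject derived from its ID (e.g. channels.<channel_id>, an assumption for illustration), you could watch the raw traffic on the broker with the NATS CLI for debugging purposes:
+nats sub "channels.<channel_id>"
+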
+In general, there are no constraints put on the content that is being exchanged through channels. However, in order to be post-processed and normalized, messages should be formatted using SenML.
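+For example, a SenML batch carrying one voltage and two current readings, like the one used in the messaging examples later in this document, looks like this:
+[{"bn":"some-base-name:","bt":1.276020076001e+09,"bu":"A","bver":5,"n":"voltage","u":"V","v":120.1},{"n":"current","t":-5,"v":1.2},{"n":"current","t":-4,"v":1.3}]
+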
+The Mainflux platform can be run on the edge as well. Deploying Mainflux on a gateway makes it possible to collect, store and analyze data, and to organize and authenticate devices. To connect Mainflux instances running on a gateway with Mainflux in the cloud, we can use two gateway services developed for that purpose:
+Running Mainflux on a gateway moves computation from the cloud towards the edge, thus decentralizing the IoT system. Since we can deploy the same Mainflux code on a gateway and in the cloud, there are many benefits, but the biggest one is easy deployment and adoption - once engineers understand how to deploy and maintain the platform, they will be able to apply those same skills to any part of the edge-fog-cloud continuum. This is because the platform is designed to be consistent, making it easy for engineers to move between environments. This consistency saves engineers time and effort, and it also helps to improve the reliability and security of the platform. The same set of tools can be used, and the same patches and bug fixes can be applied. The whole system is much easier to reason about, and the maintenance is much easier and less costly.
+ + + + + + +For user authentication Mainflux uses Authentication keys. There are two types of authentication keys:
+Authentication keys are represented and distributed by the corresponding JWT. User keys are issued when a user logs in. Each user request (other than registration and login) contains a user key that is used to authenticate the user.
+The recovery key is the password recovery key. It's a short-lived token used for the password recovery process.
+The following actions are supported:
+By default, Mainflux uses the Mainflux Thing secret for authentication. The Thing secret is a secret key that is generated when the Thing is created. In order to authenticate, the Thing needs to send its secret with the message. The way the secret is passed depends on the protocol used to send a message and differs from adapter to adapter. For more details on how this secret is passed around, please check out the messaging section. This is the default Mainflux authentication mechanism and this method is used if the composition is started using the following command:
+docker-compose -f docker/docker-compose.yml up
+
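+For example, with this default setup a thing can publish a SenML message over the HTTP adapter by passing its secret in the Authorization header (a sketch; replace <channel_id> and <thing_secret> with real values):
+curl -s -S -i -X POST -H "Authorization: Thing <thing_secret>" -H "Content-Type: application/senml+json" http://localhost/http/channels/<channel_id>/messages -d '[{"n":"temperature","u":"C","v":23.5}]'
+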
+In most cases, HTTPS, WSS, MQTTS or secure CoAP are secure enough. However, sometimes you might need an even more secure connection. Mainflux supports mutual TLS authentication (mTLS) based on X.509 certificates. By default, the TLS protocol only proves the identity of the server to the client using the X.509 certificate, and the authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication. This is called two-way or mutual authentication. Mainflux currently supports mTLS over HTTP, WS, MQTT and MQTT over WS protocols. In order to run the Docker composition with mTLS turned on, you can execute the following command from the project root:
+AUTH=x509 docker-compose -f docker/docker-compose.yml up -d
+
+Mutual authentication includes client-side certificates. Certificates can be generated using the simple script provided here. In order to create a valid certificate, you need to create a Mainflux thing using the process described in the provisioning section. After that, you need to fetch the created thing's secret. The thing secret will be used to create an x.509 certificate for the corresponding thing. To create a certificate, execute the following commands:
+cd docker/ssl
+make ca CN=<common_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>
+make server_cert CN=<common_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>
+make thing_cert THING_SECRET=<thing_secret> CRT_FILE_NAME=<cert_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>
+
+These commands use the OpenSSL tool, so please make sure that you have it installed and set up before running them. The default values for the Makefile variables are:
+CRT_LOCATION = certs
+THING_SECRET = d7cc2964-a48b-4a6e-871a-08da28e7883d
+O = Mainflux
+OU = mainflux
+EA = info@mainflux.com
+CN = localhost
+CRT_FILE_NAME = thing
+
+Normally, in order to get things running, you will need to specify only THING_SECRET. The other variables are not mandatory and the TLS termination should work with the default values.
+make ca will generate a self-signed certificate that will later be used as a CA to sign other generated certificates. The CA will expire in 3 years.
+make server_cert will generate and sign (with the previously created CA) the server cert, which will expire after 1000 days. This cert is used as the Mainflux server-side certificate in the usual TLS flow to establish an HTTPS or MQTTS connection.
+make thing_cert will finally generate and sign a client-side certificate and private key for the thing.
+In this example <thing_secret> represents the secret of the thing and <cert_name> represents the name of the certificate and key file which will be saved in the docker/ssl/certs directory. The generated certificate will expire after 2 years. The key must be stored in the x.509 certificate CN field. This script is created for testing purposes and is not meant to be used in production. We strongly recommend avoiding self-signed certificates and using a certificate management tool such as Vault for production.
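+If you want to double-check that the thing secret ended up in the certificate's CN field, you can inspect the generated certificate with OpenSSL (assuming the default CRT_FILE_NAME=thing):
+openssl x509 -in docker/ssl/certs/thing.crt -noout -subject
+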
+Once you have created the CA and the server-side cert, you can spin up the composition using:
+AUTH=x509 docker-compose -f docker/docker-compose.yml up -d
+
+Then, you can create a user and provision things and channels. Now, in order to send a message from a specific thing to a channel, you need to connect the thing to the channel and generate a corresponding client certificate using the aforementioned commands. To publish a message to the channel, the thing should send the following request:
+const WebSocket = require("ws");
+// Do not verify self-signed certificates if you are using one.
+process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
+// Replace <channel_id> and <thing_secret> with real values.
+const ws = new WebSocket(
+ "wss://localhost/ws/channels/<channel_id>/messages?authorization=<thing_secret>",
+ // This is ClientOptions object that contains client cert and client key in the form of string. You can easily load these strings from cert and key files.
+ {
+ cert: `-----BEGIN CERTIFICATE-----....`,
+ key: `-----BEGIN RSA PRIVATE KEY-----.....`,
+ }
+);
+ws.on("open", () => {
+ ws.send("something");
+});
+ws.on("message", (data) => {
+ console.log(data);
+});
+ws.on("error", (e) => {
+ console.log(e);
+});
+
+As you can see, the Authorization header does not have to be present in the HTTP request, since the secret is present in the certificate. However, if you pass the Authorization header, it must be the same as the key in the cert. In the case of MQTTS, the password field in the CONNECT message must match the key from the certificate. In the case of WSS, the Authorization header or the authorization query parameter must match the cert key.
curl -s -S -i --cacert docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -X POST -H "Content-Type: application/senml+json" https://localhost/http/channels/<channel_id>/messages -d '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
+
+mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost -p 8883 --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -m '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
+
+mosquitto_sub -u <thing_id> -P <thing_secret> --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -t channels/<channel_id>/messages -h localhost -p 8883
+
+
+
+
+
+
+
+ Mainflux uses policies to control permissions on entities: users, things, groups and channels. Under the hood, Mainflux uses its own fine grained access control list. Policies define permissions for the entities. For example, which user has access to a specific thing. Such policies have three main components: subject, object, and action.
+To put it briefly:
+Subject: As the name suggests, it is the subject that will have the policy such as users or things. Mainflux uses entity UUID on behalf of the real entities.
+Object: Objects are Mainflux entities (e.g. channels or groups) represented by their UUID.
+Action: This is the action that the subject wants to do on the object. This is one of the supported actions (read, write, update, delete, list or add).
+Above this, we have a domain specifier called entityType. It specifies either group-level or client-level access. With the client entity type, a client can have an action over another client in the same group, while with the group entity type a client has an action over the group itself, i.e. a direct association.
+All three components create a single policy.
+// Policy represents an argument struct for making policy-related function calls.
+
+type Policy struct {
+ Subject string `json:"subject"`
+ Object string `json:"object"`
+ Actions []string `json:"actions"`
+}
+
+var examplePolicy = Policy{
+ Subject: userID,
+ Object: groupID,
+ Actions: []string{groupListAction},
+}
+
+The initial implementation of policy handling is meant to be used on the Group level.
+There are three types of policies:
+m_ Policy represents client rights to send and receive messages to a channel. Only channel members with corresponding rights can publish or receive messages to/from the channel. m_read and m_write are the only supported actions. With m_read the client can read messages from the channel. With m_write the client can write messages to the channel.
+g_ Policy represents the client's rights to modify the group/channel itself. Only group/channel members with correct rights can modify or update the group/channel, or add/remove members to/from the group. g_add, g_list, g_update and g_delete are the only supported actions. With g_add the client can add members to the group/channel. With g_list the client can list the group/channel and its members. With g_update the client can update the group/channel. With g_delete the client can delete the group/channel.
+Finally, the c_ policy represents the rights the member has over other members of the group/channel. Only group/channel members with correct rights can modify or update other members of the group/channel. c_list, c_update, c_share and c_delete are the only supported actions. With c_list the client can list other members of the group/channel. With c_update the client can update other members of the group/channel. With c_share the client can share the group/channel with other clients. With c_delete the client can delete other members of the group/channel.
+By default, Mainflux adds the listing action to c_ and g_ policies. This means that all members of the group/channel can list its members. When adding a new member to a group with the g_add, g_update or g_delete action, Mainflux will automatically add the g_list action to the new member's policy. This means that the new member will be able to list the group/channel. When adding a new member to a group/channel with the c_update or c_delete action, Mainflux will automatically add the c_list action to the new member's policy. This means that the new member will be able to list the members of the group/channel.
+The rules are specified in the policies association table. The table looks like this:
+| subject | object | actions |
+| --- | --- | --- |
+| clientA | groupA | ["g_add", "g_list", "g_update", "g_delete"] |
+| clientB | groupA | ["c_list", "c_update", "c_delete"] |
+| clientC | groupA | ["c_update"] |
+| clientD | groupA | ["c_list"] |
+| clientE | groupB | ["c_list", "c_update", "c_delete"] |
+| clientF | groupB | ["c_update"] |
+| clientD | groupB | ["c_list"] |
+| clientG | groupC | ["m_read"] |
+| clientH | groupC | ["m_read", "m_write"] |
+Actions such as c_list and c_update represent actions that are allowed for the client with client_id to execute over all the other clients that are members of the group with group_id. Actions such as g_update represent actions allowed for the client with client_id to execute against the group with group_id.
+For the sake of simplicity, all the operations at the moment are executed on the group level - the group acts as a namespace in the context of authorization and is required.
+Actions for clientA:
+they can add members to groupA
+when clientA lists groups, groupA will be listed
+clientA can list members of groupA
+they can change the status of groupA
+Actions for clientB:
+when they list clients they will list clientA, clientC and clientD, since they are connected in the same group groupA and they have the c_list action
+they can update clients connected to the same group they are connected in, i.e. they can update clientA, clientC and clientD since they are in the same groupA
+they can change the status of clients connected to the same group they are connected in, i.e. they are able to change the status of clientA, clientC and clientD since they are in the same group groupA
+Actions for clientC:
+they can update clients connected to the same group they are connected in, i.e. they can update clientA, clientB and clientD since they are in the same groupA
+Actions for clientD:
+when they list clients they will list clientA, clientB and clientC, since they are connected in the same group groupA and they have the c_list action, and also clientE and clientF, since they are connected to the same group groupB and they have the c_list action
+Actions for clientE:
+when they list clients they will list clientF and clientD, since they are connected in the same group groupB and they have the c_list action
+they can update clients connected to the same group they are connected in, i.e. they can update clientF and clientD since they are in the same groupB
+they can change the status of clients connected to the same group they are connected in, i.e. they are able to change the status of clientF and clientD since they are in the same group groupB
+Actions for clientF:
+they can update clients connected to the same group they are connected in, i.e. they can update clientE and clientD since they are in the same groupB
+Actions for clientG:
+they can read messages posted in group groupC
+Actions for clientH:
+they can read from groupC and write messages to groupC
+In order to check whether a user has the policy or not, Mainflux makes a gRPC call to policies API, then policies sub-service handles the checking existence of the policy.
+All policies are stored in the Postgres Database. The database responsible for storing all policies is deployed along with the Mainflux.
+Mainflux comes with predefined policies.
+<admin_id>
has admin
role as part of its description.Things
: c_update
, c_list
, c_share
and c_delete
.c_update
, c_list
and c_delete
policies on the Thing
since they are the owner.c_list
policy on that thing.c_update
policy on that thing.c_share
policy on that thing.c_delete
policy on that thing.g_add
, g_update
, g_list
and g_delete
policy on the group.You can add policies as well through an HTTP endpoint. Only admin or member with g_add
policy to the object can use this endpoint. Therefore, you need an authentication token.
user_token must belong to the user.
+++Must-have: user_token, group_id, user_id and policy_actions
+
curl -isSX POST 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d '{"subject": "<user_id>", "object": "<group_id>", "actions": ["<action_1>", ..., "<action_N>"]}'
+
+For example:
+curl -isSX POST 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" -d '{"subject": "0b530292-3c1d-4c7d-aff5-b141b5c5d3e9", "object": "0a4a2c33-2d0e-43df-b51c-d905aba99e17", "actions": ["c_list", "g_list"]}'
+
+HTTP/1.1 201 Created
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:40:06 GMT
+Content-Type: application/json
+Content-Length: 0
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+++Must-have: user_token, group_id, user_id and policy_actions
+
curl -isSX PUT 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" -d '{"subject": "<user_id>", "object": "<group_id>", "actions": ["<action_1>", ..., "<action_N>"]}'
+
+For example:
+curl -isSX PUT 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" -d '{"subject": "0b530292-3c1d-4c7d-aff5-b141b5c5d3e9", "object": "0a4a2c33-2d0e-43df-b51c-d905aba99e17", "actions": ["c_delete"]}'
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:41:00 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+++Must-have: user_token
+
curl -isSX GET 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>"
+
+For example:
+curl -isSX GET 'http://localhost/users/policies' -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN"
+
+HTTP/1.1 200 OK
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:41:32 GMT
+Content-Type: application/json
+Content-Length: 305
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+{
+ "limit": 10,
+ "offset": 0,
+ "total": 1,
+ "policies": [
+ {
+ "owner_id": "94939159-d129-4f17-9e4e-cc2d615539d7",
+ "subject": "0b530292-3c1d-4c7d-aff5-b141b5c5d3e9",
+ "object": "0a4a2c33-2d0e-43df-b51c-d905aba99e17",
+ "actions": ["c_delete"],
+ "created_at": "2023-06-14T13:40:06.582315Z",
+ "updated_at": "2023-06-14T13:41:00.636733Z"
+ }
+ ]
+}
+
+The admin can delete policies. Only policies defined in the Predefined Policies section are allowed.
+++Must-have: user_token, object, subjects_ids and policies
+
curl -isSX DELETE -H "Accept: application/json" -H "Authorization: Bearer <user_token>" http://localhost/users/policies -d '{"subject": "user_id", "object": "<group_id>"}'
+
+For example:
+curl -isSX DELETE -H 'Accept: application/json' -H "Authorization: Bearer $USER_TOKEN" http://localhost/users/policies -d '{"subject": "0b530292-3c1d-4c7d-aff5-b141b5c5d3e9", "object": "0a4a2c33-2d0e-43df-b51c-d905aba99e17"}'
+
+HTTP/1.1 204 No Content
+Server: nginx/1.23.3
+Date: Wed, 14 Jun 2023 13:43:46 GMT
+Content-Type: application/json
+Connection: keep-alive
+Access-Control-Expose-Headers: Location
+
+If you delete policies, the policy will be removed from the policy storage. Further authorization checks related to that policy will fail.
+MZBench is an open-source tool that can generate large traffic and measure the performance of the application. MZBench is a distributed, cloud-aware benchmarking tool that can seamlessly scale to millions of requests. It was originally developed by satori-com, but we will use the mzbench fork because it can run with the newest Erlang releases and the original MzBench repository is not maintained anymore.
+We will describe installing the MZBench server on Ubuntu 18.04 (this can be on your PC or on some external cloud server, like a droplet on Digital Ocean).
+Install latest OTP/Erlang (it's version 22.3 for me)
+sudo apt update
+sudo apt install erlang
+
+For running this tool you will also need libz-dev package:
+sudo apt-get update
+sudo apt-get install libz-dev
+
+and pip:
+sudo apt install python-pip
+
+Clone mzbench tool and install the requirements:
+git clone https://github.com/mzbench/mzbench
+cd mzbench
+sudo pip install -r requirements.txt
+
+This should be enough for installing MZBench, and you can now start MZBench server with this CLI command:
+./bin/mzbench start_server
+
+The MZBench CLI lets you control the server and benchmarks from the command line.
+Another way of using MZBench is over the Dashboard. After starting the server, you should check the dashboard at http://localhost:4800.
+Note that if you are installing MZBench on an external server (i.e. a Digital Ocean droplet), you'll be able to reach the MZBench dashboard on your server's IP address, port 4800, if you previously:
+changed network_interface from 127.0.0.1 to 0.0.0.0 in the configuration file (the default configuration file location is ~/.config/mzbench/server.config; create it from the sample configuration file ~/.config/mzbench/server.config.example)
+opened port 4800 with ufw allow 4800
+MZBench can run your test scenarios on many nodes, simultaneously. For now, you are able to run tests locally, so your nodes will be virtual nodes on the machine where the MZBench server is installed (your PC or DO droplet). You can try one of our MQTT scenarios that uses the vmq_mzbench worker. Copy-paste the scenario into the MZBench dashboard, click Environmental variables -> Add from script and add appropriate values. Because it's running locally, you should try with smaller values, for example for the fan-in scenario use 100 publishers on 2 nodes. Try this before moving forward to setting up the Amazon EC2 plugin.
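+If you prefer the command line to the dashboard, a scenario can also be started through the MZBench CLI (a sketch; the scenario file and variable names depend on the worker and script you use):
+./bin/mzbench run <scenario_file> --env <name>=<value>
+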
+For larger-scale tests we will set up MZBench to run each node as an Amazon EC2 instance with the built-in plugin mzb_api_ec2_plugin.
+This is basic architecture when running MZBench:
+Every node that runs your scenarios will be an Amazon EC2 instance, plus one additional node, the director node. The director doesn't run scenarios; it collects the metrics from the other nodes and runs post and pre hooks. So, if you want to run jobs on 10 nodes, 11 EC2 instances will actually be created. All instances will be automatically terminated when the test finishes.
+We will use one of the ready-to-use Amazon Machine Images (AMI) with all the necessary dependencies. We will choose an AMI with OTP 22, because that is the version we have on the MZBench server. So, we will search for the MZBench-erl22 AMI and find one with id ami-03a169923be706764, available in the us-west-1b zone. If you have chosen this AMI, everything you do from now on must be in the us-west-1 zone. We must have an IAM user with the AmazonEC2FullAccess and IAMFullAccess permissions policies, and its access_key_id and secret_access_key go into the configuration file. In the EC2 dashboard, you must create a new security group MZbench_cluster where you will add inbound rules to open ssh and TCP ports 4801-4804. Also, in the EC2 dashboard go to the key pairs section, click Actions -> Import key pair and upload the public key you have on your MZBench server in ~/.ssh/id_rsa.pub (if you need to create a new one, run ssh-keygen and follow the instructions). Give it a name on the EC2 dashboard, and put that name (key_name) and path (keyfile) in the configuration file.
[
+{mzbench_api, [
+{network_interface,"0.0.0.0"},
+{keyfile, "~/.ssh/id_rsa"},
+{cloud_plugins, [
+ {local,#{module => mzb_dummycloud_plugin}},
+ {ec2, #{module => mzb_api_ec2_plugin,
+ instance_spec => [
+ {image_id, "ami-03a169923be706764"},
+ {group_set, ["MZbench_cluster"]},
+ {instance_type, "t2.micro"},
+ {availability_zone, "us-west-1b"},
+ {iam_instance_profile_name, "mzbench"},
+ {key_name, "key_pair_name"}
+ ],
+ config => [
+ {ec2_host, "ec2.us-west-1.amazonaws.com"},
+ {access_key_id, "IAM_USER_ACCESS_KEY_ID"},
+ {secret_access_key, "IAM_USER_SECRET_ACCESS_KEY"}
+ ],
+ instance_user => "ec2-user"
+ }}
+ ]
+}
+]}].
+
+There are both local and ec2 plugins in this configuration file, so you can choose to run tests on either of them. The default path for the configuration file is ~/.config/mzbench/server.config; if it's somewhere else, the server is started with:
./bin/mzbench start_server --config <config_file>
+
+Note that every time you update the configuration you have to restart the server:
+./bin/mzbench restart_server
+
+Testing environment to be determined.
+In this scenario, a large number of requests is sent to the HTTP adapter service every second. This test checks how much time the HTTP adapter needs to respond to each request.
+TBD
+In this scenario, a large number of requests is sent to the things service to create things and then to retrieve their data. This test checks how much time the things service needs to respond to each request.
+TBD
+Bootstrapping refers to a self-starting process that is supposed to proceed without external input. The Mainflux platform supports the bootstrapping process, but some of the preconditions need to be fulfilled in advance. The device can trigger a bootstrap when:
++Bootstrapping and provisioning are two different procedures. Provisioning refers to entities management while bootstrapping is related to entity configuration.
+
Bootstrapping procedure is the following:
++1) Configure device with Bootstrap service URL, an external key and external ID
++ ++Optionally create Mainflux channels if they don't exist
+ +Optionally create Mainflux thing if it doesn't exist
+
+2) Upload configuration for the Mainflux thing
++3) Bootstrap - send a request for the configuration
++4) Connect/disconnect thing from channels, update or remove configuration
+The configuration of Mainflux thing consists of three major parts:
+Also, the configuration contains an external ID and external key, which will be explained later. +In order to enable the thing to start bootstrapping process, the user needs to upload a valid configuration for that specific thing. This can be done using the following HTTP request:
+curl -s -S -i -X POST -H "Authorization: Bearer <user_token>" -H "Content-Type: application/json" http://localhost:9013/things/configs -d '{
+ "external_id":"09:6:0:sb:sa",
+ "thing_id": "7d63b564-3092-4cda-b441-e65fc1f285f0",
+ "external_key":"key",
+ "name":"some",
+ "channels":[
+ "78c9b88c-b2c4-4d58-a973-725c32194fb3",
+ "c4d6edb2-4e23-49f2-b6ea-df8bc6769591"
+],
+ "content": "config...",
+ "client_cert": "PEM cert",
+ "client_key": "PEM client cert key",
+ "ca_cert": "PEM CA cert"
+}'
+
+In this example, channels
field represents the list of Mainflux channel IDs the thing is connected to. These channels need to be provisioned before the configuration is uploaded. Field content
represents custom configuration. This custom configuration contains parameters that can be used to set up the thing. It can also be empty if no additional set up is needed. Field name
is human readable name and thing_id
is an ID of the Mainflux thing. This field is not required. If thing_id
is empty, corresponding Mainflux thing will be created implicitly and its ID will be sent as a part of Location
header of the response. Fields client_cert
, client_key
and ca_cert
represent PEM or base64-encoded DER client certificate, client certificate key and trusted CA, respectively.
There are two more fields: external_id
and external_key
. External ID represents an ID of the device that corresponds to the given thing. For example, this can be a MAC address or the serial number of the device. The external key represents the device key. This is the secret key that's safely stored on the device and it is used to authorize the thing during the bootstrapping process. Please note that external ID and external key and Mainflux ID and Mainflux key are completely different concepts. External id and key are only used to authenticate a device that corresponds to the specific Mainflux thing during the bootstrapping procedure. As Configuration optionally contains client certificate and issuing CA, it's possible that device is not able to establish TLS encrypted communication with Mainflux before bootstrapping. For that purpose, Bootstrap service exposes endpoint used for secure bootstrapping which can be used regardless of protocol (HTTP or HTTPS). Both device and Bootstrap service use a secret key to encrypt the content. Encryption is done as follows:
++Please have on mind that secret key is passed to the Bootstrap service as an environment variable. As security measurement, Bootstrap service removes this variable once it reads it on startup. However, depending on your deployment, this variable can still be visible as a part of your configuration or terminal emulator environment.
+
For more details on which encryption mechanisms are used, please take a look at the implementation.
+Currently, the bootstrapping procedure is executed over the HTTP protocol. Bootstrapping is nothing else but fetching and applying the configuration that corresponds to the given Mainflux thing. In order to fetch the configuration, the thing needs to send a bootstrapping request:
+curl -s -S -i -H "Authorization: Thing <external_key>" http://localhost:9013/things/bootstrap/<external_id>
+
+The response body should look something like:
+{
+ "thing_id":"7d63b564-3092-4cda-b441-e65fc1f285f0",
+ "thing_key":"d0f6ff22-f521-4674-9065-e265a9376a78",
+ "channels":[
+ {
+ "id":"c4d6edb2-4e23-49f2-b6ea-df8bc6769591",
+ "name":"c1",
+ "metadata":null
+ },
+ {
+ "id":"78c9b88c-b2c4-4d58-a973-725c32194fb3",
+ "name":"c0",
+ "metadata":null
+ }
+ ],
+ "content":"cofig...",
+ "client_cert":"PEM cert",
+ "client_key":"PEM client cert key",
+ "ca_cert":"PEM CA cert"
+}
+
+The response consists of an ID and key of the Mainflux thing, the list of channels and custom configuration (content
field). The list of channels contains not just channel IDs, but the additional Mainflux channel data (name
and metadata
fields), as well.
Uploading configuration does not automatically connect thing to the given list of channels. In order to connect the thing to the channels, user needs to send the following HTTP request:
+curl -s -S -i -X PUT -H "Authorization: Bearer <user_token>" -H "Content-Type: application/json" http://localhost:9013/things/state/<thing_id> -d '{"state": 1}'
+
+In order to disconnect, the same request should be sent with the value of state
set to 0.
For more information about the Bootstrap service API, please check out the API documentation.
+ + + + + + +Provisioning is a process of configuration of an IoT platform in which system operator creates and sets-up different entities used in the platform - users, groups, channels and things.
+Issues certificates for things. Certs
service can create certificates to be used when Mainflux
is deployed to support mTLS.
+Certs
service will create certificate for valid thing ID if valid user token is passed and user is owner of the provided thing ID.
Certificate service can create certificates in two modes:
+Vault
as PKI certificate management cert
service will proxy requests to Vault
previously checking access rights and saving info on successfully created certificate.If MF_CERTS_VAULT_HOST
is empty than Development mode is on.
To issue a certificate:
+
+USER_TOKEN=`curl -s --insecure -S -X POST https://localhost/users/tokens/issue -H "Content-Type: application/json" -d '{"identity":"john.doe@email.com", "secret":"12345678"}' | grep -oP '"access_token":"\K[^"]+'`
+
+curl -s -S -X POST http://localhost:9019/certs -H "Authorization: Bearer $USER_TOKEN" -H 'Content-Type: application/json' -d '{"thing_id":<thing_id>, "rsa_bits":2048, "key_type":"rsa"}'
+
+{
+ "ThingID": "",
+ "ClientCert": "-----BEGIN CERTIFICATE-----\nMIIDmTCCAoGgAwIBAgIRANmkAPbTR1UYeYO0Id/4+8gwDQYJKoZIhvcNAQELBQAw\nVzESMBAGA1UEAwwJbG9jYWxob3N0MREwDwYDVQQKDAhNYWluZmx1eDEMMAoGA1UE\nCwwDSW9UMSAwHgYJKoZIhvcNAQkBFhFpbmZvQG1haW5mbHV4LmNvbTAeFw0yMDA2\nMzAxNDIxMDlaFw0yMDA5MjMyMjIxMDlaMFUxETAPBgNVBAoTCE1haW5mbHV4MREw\nDwYDVQQLEwhtYWluZmx1eDEtMCsGA1UEAxMkYjAwZDBhNzktYjQ2YS00NTk3LTli\nNGYtMjhkZGJhNTBjYTYyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA\ntgS2fLUWG3CCQz/l6VRQRJfRvWmdxK0mW6zIXGeeOILYZeaLiuiUnohwMJ4RiMqT\nuJbInAIuO/Tt5osfrCFFzPEOLYJ5nZBBaJfTIAxqf84Ou1oeMRll4wpzgeKx0rJO\nXMAARwn1bT9n3uky5QQGSLy4PyyILzSXH/1yCQQctdQB/Ar/UI1TaYoYlGzh7dHT\nWpcxq1HYgCyAtcrQrGD0rEwUn82UBCrnya+bygNqu0oDzIFQwa1G8jxSgXk0mFS1\nWrk7rBipsvp8HQhdnvbEVz4k4AAKcQxesH4DkRx/EXmU2UvN3XysvcJ2bL+UzMNI\njNhAe0pgPbB82F6zkYZ/XQIDAQABo2IwYDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0l\nBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMA4GA1UdDgQHBAUBAgMEBjAfBgNVHSME\nGDAWgBRs4xR91qEjNRGmw391xS7x6Tc+8jANBgkqhkiG9w0BAQsFAAOCAQEAW/dS\nV4vNLTZwBnPVHUX35pRFxPKvscY+vnnpgyDtITgZHYe0KL+Bs3IHuywtqaezU5x1\nkZo+frE1OcpRvp7HJtDiT06yz+18qOYZMappCWCeAFWtZkMhlvnm3TqTkgui6Xgl\nGj5xnPb15AOlsDE2dkv5S6kEwJGHdVX6AOWfB4ubUq5S9e4ABYzXGUty6Hw/ZUmJ\nhCTRVJ7cQJVTJsl1o7CYT8JBvUUG75LirtoFE4M4JwsfsKZXzrQffTf1ynqI3dN/\nHWySEbvTSWcRcA3MSmOTxGt5/zwCglHDlWPKMrXtjTW7NPuGL5/P9HSB9HGVVeET\nDUMdvYwgj0cUCEu3LA==\n-----END CERTIFICATE-----\n",
+ "IssuingCA": "",
+ "CAChain": null,
+ "ClientKey": "-----BEGIN RSA PRIVATE KEY-----\nMIIEowIBAAKCAQEAtgS2fLUWG3CCQz/l6VRQRJfRvWmdxK0mW6zIXGeeOILYZeaL\niuiUnohwMJ4RiMqTuJbInAIuO/Tt5osfrCFFzPEOLYJ5nZBBaJfTIAxqf84Ou1oe\nMRll4wpzgeKx0rJOXMAARwn1bT9n3uky5QQGSLy4PyyILzSXH/1yCQQctdQB/Ar/\nUI1TaYoYlGzh7dHTWpcxq1HYgCyAtcrQrGD0rEwUn82UBCrnya+bygNqu0oDzIFQ\nwa1G8jxSgXk0mFS1Wrk7rBipsvp8HQhdnvbEVz4k4AAKcQxesH4DkRx/EXmU2UvN\n3XysvcJ2bL+UzMNIjNhAe0pgPbB82F6zkYZ/XQIDAQABAoIBAALoal3tqq+/iWU3\npR2oKiweXMxw3oNg3McEKKNJSH7QoFJob3xFoPIzbc9pBxCvY9LEHepYIpL0o8RW\nHqhqU6olg7t4ZSb+Qf1Ax6+wYxctnJCjrO3N4RHSfevqSjr6fEQBEUARSal4JNmr\n0hNUkCEjWrIvrPFMHsn1C5hXR3okJQpGsad4oCGZDp2eZ/NDyvmLBLci9/5CJdRv\n6roOF5ShWweKcz1+pfy666Q8RiUI7H1zXjPaL4yqkv8eg/WPOO0dYF2Ri2Grk9OY\n1qTM0W1vi9zfncinZ0DpgtwMTFQezGwhUyJHSYHmjVBA4AaYIyOQAI/2dl5fXM+O\n9JfXpOUCgYEA10xAtMc/8KOLbHCprpc4pbtOqfchq/M04qPKxQNAjqvLodrWZZgF\nexa+B3eWWn5MxmQMx18AjBCPwbNDK8Rkd9VqzdWempaSblgZ7y1a0rRNTXzN5DFP\noiuRQV4wszCuj5XSdPn+lxApaI/4+TQ0oweIZCpGW39XKePPoB5WZiMCgYEA2G3W\niJncRpmxWwrRPi1W26E9tWOT5s9wYgXWMc+PAVUd/qdDRuMBHpu861Qoghp/MJog\nBYqt2rQqU0OxvIXlXPrXPHXrCLOFwybRCBVREZrg4BZNnjyDTLOu9C+0M3J9ImCh\n3vniYqb7S0gRmoDM0R3Zu4+ajfP2QOGLXw1qHH8CgYEAl0EQ7HBW8V5UYzi7XNcM\nixKOb0YZt83DR74+hC6GujTjeLBfkzw8DX+qvWA8lxLIKVC80YxivAQemryv4h21\nX6Llx/nd1UkXUsI+ZhP9DK5y6I9XroseIRZuk/fyStFWsbVWB6xiOgq2rKkJBzqw\nCCEQpx40E6/gsqNDiIAHvvUCgYBkkjXc6FJ55DWMLuyozfzMtpKsVYeG++InSrsM\nDn1PizQS/7q9mAMPLCOP312rh5CPDy/OI3FCbfI1GwHerwG0QUP/bnQ3aOTBmKoN\n7YnsemIA/5w16bzBycWE5x3/wjXv4aOWr9vJJ/siMm0rtKp4ijyBcevKBxHpeGWB\nWAR1FQKBgGIqAxGnBpip9E24gH894BaGHHMpQCwAxARev6sHKUy27eFUd6ipoTva\n4Wv36iz3gxU4R5B0gyfnxBNiUab/z90cb5+6+FYO13kqjxRRZWffohk5nHlmFN9K\nea7KQHTfTdRhOLUzW2yVqLi9pzfTfA6Yqf3U1YD3bgnWrp1VQnjo\n-----END RSA PRIVATE KEY-----\n",
+ "PrivateKeyType": "",
+ "Serial": "",
+ "Expire": "0001-01-01T00:00:00Z"
+}
+
+When MF_CERTS_VAULT_HOST
is set it is presumed that Vault
is installed and certs
service will issue certificates using Vault
API.
First you'll need to set up Vault
.
To setup Vault
follow steps in Build Your Own Certificate Authority (CA).
To setup certs service with Vault
following environment variables must be set:
MF_CERTS_VAULT_HOST=vault-domain.com
+MF_CERTS_VAULT_PKI_PATH=<vault_pki_path>
+MF_CERTS_VAULT_ROLE=<vault_role>
+MF_CERTS_VAULT_TOKEN=<vault_acces_token>
+
+For lab purposes you can use docker-compose and script for setting up PKI in https://github.com/mteodor/vault.
+Issuing certificate is same as in Development mode. In this mode certificates can also be revoked:
+curl -s -S -X DELETE http://localhost:9019/certs/revoke -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' -d '{"thing_id":"c30b8842-507c-4bcd-973c-74008cef3be5"}'
+
+For more information about the Certification service API, please check out the API documentation.
+ + + + + + +Mainflux CLI makes it easy to manage users, things, channels and messages.
+CLI can be downloaded as separate asset from project realeses or it can be built with GNU Make
tool:
Get the mainflux code
+go get github.com/mainflux/mainflux
+
+Build the mainflux-cli
+make cli
+
+which will build mainflux-cli
in <project_root>/build
folder.
Executing build/mainflux-cli
without any arguments will output help with all available commands and flags:
Usage:
+ mainflux-cli [command]
+
+Available Commands:
+ bootstrap Bootstrap management
+ certs Certificates management
+ channels Channels management
+ completion Generate the autocompletion script for the specified shell
+ groups Groups management
+ health Health Check
+ help Help about any command
+ messages Send or read messages
+ policies Policies management
+ provision Provision things and channels from a config file
+ subscription Subscription management
+ things Things management
+ users Users management
+
+Flags:
+ -b, --bootstrap-url string Bootstrap service URL (default "http://localhost")
+ -s, --certs-url string Certs service URL (default "http://localhost")
+ -c, --config string Config path
+ -C, --contact string Subscription contact query parameter
+ -y, --content-type string Message content type (default "application/senml+json")
+ -e, --email string User email query parameter
+ -h, --help help for mainflux-cli
+ -p, --http-url string HTTP adapter URL (default "http://localhost/http")
+ -i, --insecure Do not check for TLS cert
+ -l, --limit uint Limit query parameter (default 10)
+ -m, --metadata string Metadata query parameter
+ -n, --name string Name query parameter
+ -o, --offset uint Offset query parameter
+ -r, --raw Enables raw output mode for easier parsing of output
+ -R, --reader-url string Reader URL (default "http://localhost")
+ -z, --state string Bootstrap state query parameter
+ -S, --status string User status query parameter
+ -t, --things-url string Things service URL (default "http://localhost")
+ -T, --topic string Subscription topic query parameter
+ -u, --users-url string Users service URL (default "http://localhost")
+
+Use "mainflux-cli [command] --help" for more information about a command.
+
+It is also possible to use the docker image mainflux/cli
to execute CLI command:
docker run -it --rm mainflux/cli -u http://<IP_SERVER> [command]
+
+For example:
+docker run -it --rm mainflux/cli -u http://192.168.160.1 users token admin@example.com 12345678
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2MjEzMDcsImlhdCI6MTY4MDYyMDQwNywiaWRlbnRpdHkiOiJhZG1pbkBleGFtcGxlLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6ImYxZTA5Y2YxLTgzY2UtNDE4ZS1iZDBmLWU3M2I3M2MxNDM2NSIsInR5cGUiOiJhY2Nlc3MifQ.iKdBv3Ko7PKuhjTC6Xs-DvqfKScjKted3ZMorTwpXCd4QrRSsz6NK_lARG6LjpE0JkymaCMVMZlzykyQ6ZgwpA",
+ "access_type": "Bearer",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA3MDY4MDcsImlhdCI6MTY4MDYyMDQwNywiaWRlbnRpdHkiOiJhZG1pbkBleGFtcGxlLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6ImYxZTA5Y2YxLTgzY2UtNDE4ZS1iZDBmLWU3M2I3M2MxNDM2NSIsInR5cGUiOiJyZWZyZXNoIn0.-0tOtXFZi48VS-FxkCnVxnW2RUkJvqUmzRz3_EYSSKFyKealoFrv7sZIUvrdvKomnUFzXshP0EygL8vjWP1SFw"
+}
+
+You can execute each command with -h
flag for more information about that command, e.g.
mainflux-cli channels -h
+
+Response should look like this:
+Channels management: create, get, update or delete Channel and get list of Things connected or not connected to a Channel
+
+Usage:
+ mainflux-cli channels [command]
+
+Available Commands:
+ connections Connections list
+ create Create channel
+ disable Change channel status to disabled
+ enable Change channel status to enabled
+ get Get channel
+ update Update channel
+
+Flags:
+ -h, --help help for channels
+
+Global Flags:
+ -b, --bootstrap-url string Bootstrap service URL (default "http://localhost")
+ -s, --certs-url string Certs service URL (default "http://localhost")
+ -c, --config string Config path
+ -C, --contact string Subscription contact query parameter
+ -y, --content-type string Message content type (default "application/senml+json")
+ -e, --email string User email query parameter
+ -h, --help help for mainflux-cli
+ -p, --http-url string HTTP adapter URL (default "http://localhost/http")
+ -i, --insecure Do not check for TLS cert
+ -l, --limit uint Limit query parameter (default 10)
+ -m, --metadata string Metadata query parameter
+ -n, --name string Name query parameter
+ -o, --offset uint Offset query parameter
+ -r, --raw Enables raw output mode for easier parsing of output
+ -R, --reader-url string Reader URL (default "http://localhost")
+ -z, --state string Bootstrap state query parameter
+ -S, --status string User status query parameter
+ -t, --things-url string Things service URL (default "http://localhost")
+ -T, --topic string Subscription topic query parameter
+ -u, --users-url string Users service URL (default "http://localhost")
+
+
+Use "mainflux-cli channels [command] --help" for more information about a command.
+
+mainflux-cli health
+
+Response should look like this:
+{
+ "build_time": "2023-06-26_13:16:16",
+ "commit": "8589ad58f4ac30a198c101a7b8aa7ac2c54b2d05",
+ "description": "things service",
+ "status": "pass",
+ "version": "0.13.0"
+}
+
+Mainflux has two options for user creation: with or without a <user_token>. If the <user_token> is provided, the created user will be owned by the user identified by that token. Otherwise, when the token is not used, the user will not have an owner, since everybody can create new users; however, the token is still required in order to be consistent. For more details, please see the Authorization page.
mainflux-cli users create <user_name> <user_email> <user_password>
+
+mainflux-cli users create <user_name> <user_email> <user_password> <user_token>
+
+mainflux-cli users token <user_email> <user_password>
+
+mainflux-cli users refreshtoken <refresh_token>
+
+mainflux-cli users get <user_id> <user_token>
+
+mainflux-cli users get all <user_token>
+
+mainflux-cli users update <user_id> '{"name":"value1", "metadata":{"value2": "value3"}}' <user_token>
+
+mainflux-cli users update tags <user_id> '["tag1", "tag2"]' <user_token>
+
+mainflux-cli users update identity <user_id> <user_email> <user_token>
+
+mainflux-cli users update owner <user_id> <owner_id> <user_token>
+
+mainflux-cli users password <old_password> <password> <user_token>
+
+mainflux-cli users enable <user_id> <user_token>
+
+mainflux-cli users disable <user_id> <user_token>
+
+mainflux-cli users profile <user_token>
+
+mainflux-cli groups create '{"name":"<group_name>","description":"<description>","parentID":"<parent_id>","metadata":"<metadata>"}' <user_token>
+
+mainflux-cli groups get <group_id> <user_token>
+
+mainflux-cli groups get all <user_token>
+
+mainflux-cli groups update '{"id":"<group_id>","name":"<group_name>","description":"<description>","metadata":"<metadata>"}' <user_token>
+
+mainflux-cli groups members <group_id> <user_token>
+
+mainflux-cli groups membership <member_id> <user_token>
+
+mainflux-cli groups assign <member_ids> <member_type> <group_id> <user_token>
+
+mainflux-cli groups unassign <member_ids> <group_id> <user_token>
+
+mainflux-cli groups enable <group_id> <user_token>
+
+mainflux-cli groups disable <group_id> <user_token>
+
+mainflux-cli things create '{"name":"myThing"}' <user_token>
+
+mainflux-cli things create '{"name":"myThing", "metadata": {"key1":"value1"}}' <user_token>
+
+mainflux-cli provision things <file> <user_token>
+
+file
- A CSV or JSON file containing thing names (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be:
+thing1,
+thing2,
+thing3,
+
+in which the first column is thing names.
+A comparable JSON file would be
+[
+ {
+ "name": "<thing1_name>",
+ "status": "enabled"
+ },
+ {
+ "name": "<thing2_name>",
+ "status": "disabled"
+ },
+ {
+ "name": "<thing3_name>",
+ "status": "enabled",
+ "credentials": {
+ "identity": "<thing3_identity>",
+ "secret": "<thing3_secret>"
+ }
+ }
+]
+
+With JSON you can specify more fields of the things you want to create.
+mainflux-cli things update <thing_id> '{"name":"value1", "metadata":{"key1": "value2"}}' <user_token>
+
+mainflux-cli things update tags <thing_id> '["tag1", "tag2"]' <user_token>
+
+mainflux-cli things update owner <thing_id> <owner_id> <user_token>
+
+mainflux-cli things update secret <thing_id> <secret> <user_token>
+
+mainflux-cli things identify <thing_secret>
+
+mainflux-cli things enable <thing_id> <user_token>
+
+mainflux-cli things disable <thing_id> <user_token>
+
+mainflux-cli things get <thing_id> <user_token>
+
+mainflux-cli things get all <user_token>
+
+mainflux-cli things get all --offset=1 --limit=5 <user_token>
+
+mainflux-cli things share <channel_id> <user_id> <allowed_actions> <user_token>
+
+mainflux-cli channels create '{"name":"myChannel"}' <user_token>
+
+mainflux-cli provision channels <file> <user_token>
+
+file
- A CSV or JSON file containing channel names (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be:
+<channel1_name>,
+<channel2_name>,
+<channel3_name>,
+
+in which the first column is channel names.
+A comparable JSON file would be
+[
+ {
+ "name": "<channel1_name>",
+ "description": "<channel1_description>",
+ "status": "enabled"
+ },
+ {
+ "name": "<channel2_name>",
+ "description": "<channel2_description>",
+ "status": "disabled"
+ },
+ {
+ "name": "<channel3_name>",
+ "description": "<channel3_description>",
+ "status": "enabled"
+ }
+]
+
+With JSON you can specify more fields of the channels you want to create.
+mainflux-cli channels update '{"id":"<channel_id>","name":"myNewName"}' <user_token>
+
+mainflux-cli channels enable <channel_id> <user_token>
+
+mainflux-cli channels disable <channel_id> <user_token>
+
+mainflux-cli channels get <channel_id> <user_token>
+
+mainflux-cli channels get all <user_token>
+
+mainflux-cli channels get all --offset=1 --limit=5 <user_token>
+
+mainflux-cli things connect <thing_id> <channel_id> <user_token>
+
+mainflux-cli provision connect <file> <user_token>
+
+file
- A CSV or JSON file containing thing and channel ids (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be
+<thing_id1>,<channel_id1>
+<thing_id2>,<channel_id2>
+
+in which the first column is thing IDs and the second column is channel IDs. A connection will be created for each thing to each channel. This example would result in 4 connections being created.
+A comparable JSON file would be
+{
+ "subjects": ["<thing_id1>", "<thing_id2>"],
+ "objects": ["<channel_id1>", "<channel_id2>"]
+}
+
+mainflux-cli things disconnect <thing_id> <channel_id> <user_token>
+
+mainflux-cli things connections <thing_id> <user_token>
+
+mainflux-cli channels connections <channel_id> <user_token>
+
+mainflux-cli messages send <channel_id> '[{"bn":"Dev1","n":"temp","v":20}, {"n":"hum","v":40}, {"bn":"Dev2", "n":"temp","v":20}, {"n":"hum","v":40}]' <thing_secret>
+
+mainflux-cli messages read <channel_id> <user_token> -R <reader_url>
+
+mainflux-cli bootstrap create '{"external_id": "myExtID", "external_key": "myExtKey", "name": "myName", "content": "myContent"}' <user_token> -b <bootstrap-url>
+
+mainflux-cli bootstrap get <thing_id> <user_token> -b <bootstrap-url>
+
+mainflux-cli bootstrap update '{"mainflux_id":"<thing_id>", "name": "newName", "content": "newContent"}' <user_token> -b <bootstrap-url>
+
+mainflux-cli bootstrap remove <thing_id> <user_token> -b <bootstrap-url>
+
+mainflux-cli bootstrap bootstrap <external_id> <external_key> -b <bootstrap-url>
+
+Mainflux CLI tool supports configuration files that contain some of the basic settings so you don't have to specify them through flags. Once you set the settings, they remain stored locally.
+mainflux-cli config <parameter> <value>
+
+Response should look like this:
+ ok
+
+This command is used to set the flags to be used by CLI in a local TOML file. The default location of the TOML file is in the same directory as the CLI binary. To change the location of the TOML file you can run the command:
+ mainflux-cli config <parameter> <value> -c "cli/file_name.toml"
+
+The possible parameters that can be set using the config command are:
Flag | Description | Default
---|---|---
bootstrap_url | Bootstrap service URL | "http://localhost:9013"
certs_url | Certs service URL | "http://localhost:9019"
http_adapter_url | HTTP adapter URL | "http://localhost/http"
msg_content_type | Message content type | "application/senml+json"
reader_url | Reader URL | "http://localhost"
things_url | Things service URL | "http://localhost:9000"
tls_verification | Do not check for TLS cert |
users_url | Users service URL | "http://localhost:9002"
state | Bootstrap state query parameter |
status | User status query parameter |
topic | Subscription topic query parameter |
contact | Subscription contact query parameter |
email | User email query parameter |
limit | Limit query parameter | 10
metadata | Metadata query parameter |
name | Name query parameter |
offset | Offset query parameter |
raw_output | Enables raw output mode for easier parsing of output |
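+For example, to persist a couple of the parameters from the table above in the local TOML file (the values here are only illustrative):
+mainflux-cli config limit 20
+mainflux-cli config things_url http://localhost:9000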
Mainflux source can be found in the official Mainflux GitHub repository. You should fork this repository in order to make changes to the project. The forked version of the repository should be cloned using the following:
+git clone <forked repository> $SOMEPATH/mainflux
+cd $SOMEPATH/mainflux
+
+Note: If your $SOMEPATH
is equal to $GOPATH/src/github.com/mainflux/mainflux
, make sure that your $GOROOT
and $GOPATH
do not overlap (otherwise, go modules won't work).
Make sure that you have Protocol Buffers (version 21.12) compiler (protoc
) installed.
Go Protobuf installation instructions are here. Go Protobuf uses C bindings, so you will need to install C++ protobuf as a prerequisite. Mainflux uses Protocol Buffers for Go with Gadgets
to generate faster marshaling and unmarshaling Go code. Protocol Buffers for Go with Gadgets installation instructions can be found here.
A copy of Go (version 1.19.4) and docker template (version 3.7) will also need to be installed on your system.
+If any of these versions seem outdated, the latest can always be found in our CI script.
+Use the GNU Make tool to build all Mainflux services:
+make
+
+Build artifacts will be put in the build
directory.
++N.B. All Mainflux services are built as statically linked binaries. This way they can be portable (transferred to any platform just by placing them there and running them) as they contain all needed libraries and do not rely on shared system libraries. This helps create FROM scratch Docker images.
+
Individual microservices can be built with:
+make <microservice_name>
+
+For example:
+make http
+
+will build the HTTP Adapter microservice.
+Dockers can be built with:
+make dockers
+
+or individually with:
+make docker_<microservice_name>
+
+For example:
+make docker_http
+
+++N.B. Mainflux creates
+FROM scratch
docker containers which are compact and small in size.N.B. The
+things-db
andusers-db
containers are built from a vanilla PostgreSQL docker image downloaded from docker hub which does not persist the data when these containers are rebuilt. Thus, rebuilding of all docker containers withmake dockers
or rebuilding thethings-db
andusers-db
containers separately withmake docker_things-db
andmake docker_users-db
respectively, will cause data loss. All your users, things, channels and connections between them will be lost! As we use this setup only for development, we don't guarantee any permanent data persistence. Though, in order to enable data retention, we have configured persistent volumes for each container that stores some data. If you want to update your Mainflux dockerized installation and want to keep your data, usemake cleandocker
to clean the containers and images and keep the data (stored in docker persistent volumes) and thenmake run
to update the images and the containers. Check the Cleaning up your dockerized Mainflux setup section for details. Please note that this kind of updating might not work if there are database changes.
In order to speed up build process, you can use commands such as:
+make dockers_dev
+
+or individually with
+make docker_dev_<microservice_name>
+
+Commands make dockers
and make dockers_dev
are similar. The main difference is that building images in the development mode is done on the local machine, rather than an intermediate image, which makes building images much faster. Before running this command, corresponding binary needs to be built in order to make changes visible. This can be done using make
or make <service_name>
command. Commands make dockers_dev
and make docker_dev_<service_name>
should be used only for development to speed up the process of image building. For deployment images, commands from section above should be used.
When the project is first cloned to your system, you will need to make sure and build all of the Mainflux services.
+make
+make dockers_dev
+
+As you develop and test changes, only the services related to your changes will need to be rebuilt. This will reduce compile time and create a much more enjoyable development experience.
+make <microservice_name>
+make docker_dev_<microservice_name>
+make run
+
+Sometimes, depending on the use case and the user's needs it might be useful to override or add some extra parameters to the docker-compose configuration. These configuration changes can be done by specifying multiple compose files with the docker-compose command line option -f as described here.
+The following format of the docker-compose
command can be used to extend or override the configuration:
docker-compose -f docker/docker-compose.yml -f docker/docker-compose.custom1.yml -f docker/docker-compose.custom2.yml up [-d]
+
+In the command above each successive file overrides the previous parameters.
+A practical example in our case would be to enable debugging and tracing in NATS so that we can better see how the messages are moving around.
+docker-compose.nats-debugging.yml
version: "3"
+
+services:
+ nats:
+ command: --debug -DV
+
+When we have the override files in place, to compose the whole infrastructure including the persistent volumes we can execute:
+docker-compose -f docker/docker-compose.yml -f docker/docker-compose.nats-debugging.yml up -d
+
+Note: Please store your customizations to some folder outside the Mainflux's source folder and maybe add them to some other git repository. You can always apply your customizations by pointing to the right file using docker-compose -f ...
.
If you want to clean your whole dockerized Mainflux installation you can use the make pv=true cleandocker
command. Please note that by default the make cleandocker
command will stop and delete all of the containers and images, but NOT DELETE persistent volumes. If you want to delete the gathered data in the system (the persistent volumes) please use the following command make pv=true cleandocker
(pv = persistent volumes). This form of the command will stop and delete the containers, the images and will also delete the persistent volumes.
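+For example (these are the same targets described above):
+# stop and remove containers and images, but keep the persistent volumes
+make cleandocker
+# stop and remove containers and images, and also delete the persistent volumes
+make pv=true cleandocker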
The MQTT Microservice in Mainflux is special, as it is currently the only microservice written in NodeJS. It is not compiled, but node modules need to be downloaded in order to start the service:
+cd mqtt
+npm install
+
+Note that there is a shorthand for doing these commands with make
tool:
make mqtt
+
+After that, the MQTT Adapter can be started from top directory (as it needs to find *.proto
files) with:
node mqtt/mqtt.js
+
+Depending on your use case (MQTT topics, message size, the number of clients and the frequency with which messages are sent) you may experience some problems.
+Up until now it has been noticed that, in case of high load, big messages and many clients, the MQTT microservice can crash with the following error:
+mainflux-mqtt | FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory
+mainflux-mqtt exited with code 137
+
+This problem is caused by the default allowed memory in node (V8), which is 1.7GB by default. To fix the problem you should add the environment variable NODE_OPTIONS:--max-old-space-size=SPACE_IN_MB in the environment section of the aedes.yml configuration. To find the right value for the --max-old-space-size parameter you'll have to experiment a bit depending on your needs.
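+As an illustration only (the value below is an assumption and has to be tuned to your workload), the same option can also be exported when starting the adapter by hand:
+NODE_OPTIONS="--max-old-space-size=4096" node mqtt/mqtt.js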
The Mainflux MQTT service uses the Aedes MQTT Broker for implementation of the MQTT related things. Therefore, for some questions or problems you can also check out the Aedes's documentation or reach out its contributors.
+If you've made any changes to .proto
files, you should call protoc
command prior to compiling individual microservices.
To do this by hand, execute:
+protoc -I. --go_out=. --go_opt=paths=source_relative pkg/messaging/*.proto
+protoc -I. --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative users/policies/*.proto
+protoc -I. --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative things/policies/*.proto
+
+A shorthand to do this via make
tool is:
make proto
+
+++N.B. This must be done once at the beginning in order to generate protobuf Go structures needed for the build. However, if you don't change any of
+.proto
files, this step is not mandatory, since all generated files are included in the repository (those are files with.pb.go
extension).
Mainflux can be compiled for ARM platform and run on Raspberry Pi or other similar IoT gateways, by following the instructions here or here as well as information found here. The environment variables GOARCH=arm
and GOARM=7
must be set for the compilation.
Cross-compilation for ARM with Mainflux make:
+GOOS=linux GOARCH=arm GOARM=7 make
+
+To run all of the tests you can execute:
+make test
+
+Dockertest is used for the tests, so to run them, you will need the Docker daemon/service running.
+Installing Go binaries is simple: just move them from build
to $GOBIN
(do not forget to add $GOBIN
to your $PATH
).
You can execute:
+make install
+
+which will do this copying of the binaries.
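+The manual equivalent is simply copying the binaries yourself, e.g. (assuming $GOBIN is set and on your $PATH; the mainflux-<service> naming follows the build output shown above):
+cp build/mainflux-* $GOBIN/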
+++N.B. Only Go binaries will be installed this way. The MQTT adapter is a NodeJS script and will stay in the
+mqtt
dir.
+Mainflux depends on several infrastructural services, notably the default message broker (NATS) and the PostgreSQL database.
+Mainflux uses NATS as its default central message bus. For development purposes (when not run via Docker), it expects that NATS is installed on the local system.
+To do this execute:
+go install github.com/nats-io/nats-server/v2@latest
+
+This will install nats-server
binary that can be simply run by executing:
nats-server
+
+If you want to change the default message broker to RabbitMQ, VerneMQ or Kafka you need to install it on the local system.
+To run using a different broker you need to set the MF_BROKER_TYPE
env variable to nats
, rabbitmq
or vernemq
during make and run process.
MF_BROKER_TYPE=<broker-type> make
+MF_BROKER_TYPE=<broker-type> make run
+
+Mainflux uses PostgreSQL to store metadata (users, things and channels entities alongside authorization tokens). It expects that a PostgreSQL DB is installed, set up and running on the local system.
Information on how to set up (prepare) the PostgreSQL database can be found here, and it is done by executing the following commands:
+# Create `users` and `things` databases
+sudo -u postgres createdb users
+sudo -u postgres createdb things
+
+# Set-up Postgres roles
+sudo su - postgres
+psql -U postgres
+postgres=# CREATE ROLE mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux';
+postgres=# ALTER USER mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux';
+
+Running the Mainflux microservices can be tricky, as there are a lot of them and each demands configuration in the form of environment variables.
+The whole system (set of microservices) can be run with one command:
+make rundev
+
+which will properly configure and run all microservices.
+Please ensure that the MQTT microservice has node_modules installed, as explained in the MQTT Microservice chapter.
++N.B. make rundev actually calls the helper script scripts/run.sh, so you can inspect this script for the details.
The Mainflux IoT platform provides services that support management of devices on the edge. Typically, an IoT solution includes devices (sensors/actuators) deployed at the far edge and connected through a proxy gateway. Although most devices could be connected to Mainflux directly, using gateways decentralizes the system, decreases the load on the cloud and makes setup less difficult. Gateways can also provide additional data processing, filtering and storage.
+Services that can be used on the gateway to enable the data and control plane for the edge:
Figure 1 - Edge services deployment
The figure shows an edge gateway running Agent, Export and a minimal deployment of Mainflux services. Mainflux services enable device management and the MQTT protocol. NATS, being the default message broker in Mainflux, also becomes the central message bus for other services like Agent and Export, as well as for any custom-developed service built to interface with devices over any of the hardware interfaces supported on the gateway. Those services publish data to the message broker, where the Export service can pick them up and send them to the cloud.
Agent can be used to control deployed services as well as to monitor their liveliness by subscribing to the heartbeat Message Broker subject, where services should publish their liveliness status, like the Export service does.
Agent is a service used to manage gateways that are connected to Mainflux in the cloud. It provides a way to send commands to the gateway and receive responses via MQTT. There are two types of channels used for Agent: data and control. Over the control channel we send commands and receive responses to them, while data collected from sensors connected to the gateway is sent over the data channel. Agent is able to configure itself provided that a bootstrap server is running: it will retrieve its configuration from the bootstrap server given a few arguments - external_id and external_key (see bootstrapping).
The Agent service has the following features:
- remote session to bash managed by Agent
- by subscribing to the heartbeat.> Message Broker subject it can remotely provide info on running services, if services are publishing heartbeat (like Export)
service.
When you provisioned gateway as described in provision you can check results
+curl -s -S -X GET http://mainflux-domain.com:9013/things/bootstrap/<external_id> -H "Authorization: Thing <external_key>" -H 'Content-Type: application/json' |jq
+
+{
+ "thing_id": "e22c383a-d2ab-47c1-89cd-903955da993d",
+ "thing_key": "fc987711-1828-461b-aa4b-16d5b2c642fe",
+ "channels": [
+ {
+ "id": "fa5f9ba8-a1fc-4380-9edb-d0c23eaa24ec",
+ "name": "control-channel",
+ "metadata": {
+ "type": "control"
+ }
+ },
+ {
+ "id": "24e5473e-3cbe-43d9-8a8b-a725ff918c0e",
+ "name": "data-channel",
+ "metadata": {
+ "type": "data"
+ }
+ },
+ {
+ "id": "1eac45c2-0f72-4089-b255-ebd2e5732bbb",
+ "name": "export-channel",
+ "metadata": {
+ "type": "export"
+ }
+ }
+ ],
+ "content": "{\"agent\":{\"edgex\":{\"url\":\"http://localhost:48090/api/v1/\"},\"heartbeat\":{\"interval\":\"30s\"},\"log\":{\"level\":\"debug\"},\"mqtt\":{\"mtls\":false,\"qos\":0,\"retain\":false,\"skip_tls_ver\":true,\"url\":\"tcp://mainflux-domain.com:1883\"},\"server\":{\"nats_url\":\"localhost:4222\",\"port\":\"9000\"},\"terminal\":{\"session_timeout\":\"30s\"}},\"export\":{\"exp\":{\"cache_db\":\"0\",\"cache_pass\":\"\",\"cache_url\":\"localhost:6379\",\"log_level\":\"debug\",\"nats\":\"nats://localhost:4222\",\"port\":\"8172\"},\"mqtt\":{\"ca_path\":\"ca.crt\",\"cert_path\":\"thing.crt\",\"channel\":\"\",\"host\":\"tcp://mainflux-domain.com:1883\",\"mtls\":false,\"password\":\"\",\"priv_key_path\":\"thing.key\",\"qos\":0,\"retain\":false,\"skip_tls_ver\":false,\"username\":\"\"},\"routes\":[{\"mqtt_topic\":\"\",\"nats_topic\":\"channels\",\"subtopic\":\"\",\"type\":\"mfx\",\"workers\":10},{\"mqtt_topic\":\"\",\"nats_topic\":\"export\",\"subtopic\":\"\",\"type\":\"default\",\"workers\":10}]}}"
+}
+
+external_id - usually a MAC address, but anything that suits the application's requirements can be used
external_key - the key that will be provided to the agent process
thing_id - the Mainflux thing ID
channels - a 2-element array where the first channel is CONTROL and the second is DATA; both channels should be assigned to the thing
content - used for configuring parameters of the agent and export services.
+git clone https://github.com/mainflux/agent
+make
+cd build
+
+MF_AGENT_LOG_LEVEL=debug \
+MF_AGENT_BOOTSTRAP_KEY=edged \
+MF_AGENT_BOOTSTRAP_ID=34:e1:2d:e6:cf:03 ./mainflux-agent
+
+{"level":"info","message":"Requesting config for 34:e1:2d:e6:cf:03 from http://localhost:9013/things/bootstrap","ts":"2019-12-05T04:47:24.98411512Z"}
+{"level":"info","message":"Getting config for 34:e1:2d:e6:cf:03 from http://localhost:9013/things/bootstrap succeeded","ts":"2019-12-05T04:47:24.995465239Z"}
+{"level":"info","message":"Connected to MQTT broker","ts":"2019-12-05T04:47:25.009645082Z"}
+{"level":"info","message":"Agent service started, exposed port 9000","ts":"2019-12-05T04:47:25.009755345Z"}
+{"level":"info","message":"Subscribed to MQTT broker","ts":"2019-12-05T04:47:25.012930443Z"}
+
+MF_AGENT_BOOTSTRAP_KEY - the external_key in the bootstrap configuration.
MF_AGENT_BOOTSTRAP_ID - the external_id in the bootstrap configuration.
in bootstrap configuration.# Set connection parameters as environment variables in shell
+CH=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H "Authorization: Thing <BOOTSTRAP_KEY>" -H 'Content-Type: application/json' | jq -r '.mainflux_channels[0].id'`
+TH=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H "Authorization: Thing <BOOTSTRAP_KEY>" -H 'Content-Type: application/json' | jq -r .mainflux_id`
+KEY=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H "Authorization: Thing <BOOTSTRAP_KEY>" -H 'Content-Type: application/json' | jq -r .mainflux_key`
+
+# Subscribe for response
+mosquitto_sub -d -u $TH -P $KEY -t "channels/${CH}/messages/res/#" -h some-domain-name -p 1883
+
+# Publish command e.g `ls`
+mosquitto_pub -d -u $TH -P $KEY -t channels/$CH/messages/req -h some-domain-name -p 1883 -m '[{"bn":"1:", "n":"exec", "vs":"ls, -l"}]'
+
+This can be checked from the UI: click on the details for the gateway, and below the gateway parameters you will see a box with a prompt. If agent is running and properly connected you should be able to execute commands remotely.
If there are services running on the same gateway as agent and they are publishing heartbeat to the Message Broker subject heartbeat.service_name.service, you can get the list of services by sending the following MQTT message:
# View services that are sending heartbeat
+mosquitto_pub -d -u $TH -P $KEY -t channels/$CH/messages/req -h some-domain-name -p 1883 -m '[{"bn":"1:", "n":"service", "vs":"view"}]'
+
+Response can be observed on channels/$CH/messages/res/#
You can send commands to services running on the same edge gateway as Agent if they are subscribed on same the Message Broker server and correct subject.
+Service commands are being sent via MQTT to topic:
+channels/<control_channel_id>/messages/services/<service_name>/<subtopic>
when messages is received Agent forwards them to the Message Broker on subject:
+commands.<service_name>.<subtopic>
Payload is up to the application and service itself.
+Edgex control messages are sent and received over control channel. MF sends a control SenML of the following form:
+[{"bn":"<uuid>:", "n":"control", "vs":"<cmd>, <param>, edgexsvc1, edgexsvc2, …, edgexsvcN"}}]
+
+For example,
+[{"bn":"1:", "n":"control", "vs":"operation, stop, edgex-support-notifications, edgex-core-data"}]
+
+Agent, on the other hand, returns a response SenML of the following form:
+[{"bn":"<uuid>:", "n":"<>", "v":"<RESP>"}]
+
+EdgeX defines SMA commands in the following RAML file
+Commands are:
+mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{"bn":"1:", "n":"control", "vs":"edgex-operation, start, edgex-support-notifications, edgex-core-data"}]'
+
+mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{"bn":"1:", "n":"control", "vs":"edgex-config, edgex-support-notifications, edgex-core-data"}]'
+
+mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{"bn":"1:", "n":"control", "vs":"edgex-metrics, edgex-support-notifications, edgex-core-data"}]'
+
+If you subscribe to
+mosquitto_sub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/#
+
+You can observe commands and response from commands executed against edgex
+[{"bn":"1:", "n":"control", "vs":"edgex-metrics, edgex-support-notifications, edgex-core-data"}]
+[{"bn":"1","n":"edgex-metrics","vs":"{\"Metrics\":{\"edgex-core-data\":{\"CpuBusyAvg\":15.568632467698606,\"Memory\":{\"Alloc\":2040136,\"Frees\":876344,\"LiveObjects\":15134,\"Mallocs\":891478,\"Sys\":73332984,\"TotalAlloc\":80657464}},\"edgex-support-notifications\":{\"CpuBusyAvg\":14.65381169745318,\"Memory\":{\"Alloc\":961784,\"Frees\":127430,\"LiveObjects\":6095,\"Mallocs\":133525,\"Sys\":72808696,\"TotalAlloc\":11665416}}}}\n"}]
+
+The Mainflux Export service can send messages from one Mainflux cloud to another via MQTT, or it can send messages from an edge gateway to the Mainflux cloud. The Export service is subscribed to the local message bus and connected to the MQTT broker in the cloud. Messages collected on the local message bus are redirected to the cloud. When the connection is lost, if QoS2 is used, messages from the local bus are stored in a file or in memory to be resent upon reconnection. Additionally, the Export service publishes its liveliness status to Agent via the Message Broker subject heartbeat.export.service
Get the code:
+go get github.com/mainflux/export
+cd $GOPATH/github.com/mainflux/export
+
+Make:
+make
+
+cd build
+./mainflux-export
+
+By default Export
service looks for config file at ../configs/config.toml
if no env vars are specified.
[exp]
+ log_level = "debug"
+ nats = "localhost:4222"
+ port = "8170"
+
+[mqtt]
+ username = "<thing_id>"
+ password = "<thing_password>"
+ ca_path = "ca.crt"
+ client_cert = ""
+ client_cert_key = ""
+ client_cert_path = "thing.crt"
+ client_priv_key_path = "thing.key"
+ mtls = "false"
+ priv_key = "thing.key"
+ retain = "false"
+ skip_tls_ver = "false"
+ url = "tcp://mainflux.com:1883"
+
+[[routes]]
+ mqtt_topic = "channel/<channel_id>/messages"
+ subtopic = "subtopic"
+ nats_topic = "export"
+ type = "default"
+ workers = 10
+
+[[routes]]
+ mqtt_topic = "channel/<channel_id>/messages"
+ subtopic = "subtopic"
+ nats_topic = "channels"
+ type = "mfx"
+ workers = 10
+
+The service will first look for the file specified by MF_EXPORT_CONFIG_FILE for its configuration; if it is not found, the service will be configured with env variables and a new config file at the path specified by MF_EXPORT_CONFIG_FILE (the default value will be used if none is specified) will be saved with values populated from the env vars. The service is configured using the environment variables as presented in the table. Note that any unset variables will be replaced with their default values.
For the values in environment variables to take effect, make sure that there is no MF_EXPORT_CONFIG_FILE file.
If you run with environment variables you can create config file:
+MF_EXPORT_PORT=8178 \
+MF_EXPORT_LOG_LEVEL=debug \
+MF_EXPORT_MQTT_HOST=tcp://localhost:1883 \
+MF_EXPORT_MQTT_USERNAME=<thing_id> \
+MF_EXPORT_MQTT_PASSWORD=<thing_secret> \
+MF_EXPORT_MQTT_CHANNEL=<channel_id> \
+MF_EXPORT_MQTT_SKIP_TLS=true \
+MF_EXPORT_MQTT_MTLS=false \
+MF_EXPORT_MQTT_CA=ca.crt \
+MF_EXPORT_MQTT_CLIENT_CERT=thing.crt \
+MF_EXPORT_MQTT_CLIENT_PK=thing.key \
+MF_EXPORT_CONFIG_FILE=export.toml \
+../build/mainflux-export&
+
+Values from environment variables will be used to populate export.toml
+port
- HTTP port where status of Export
service can be fetched.curl -X GET http://localhost:8170/health
+'{"status": "pass", "version":"0.12.1", "commit":"57cca9677721025da055c47957fc3e869e0325aa" , "description":"export service", "build_time": "2022-01-19_10:13:17"}'
+
+To establish connection to MQTT broker following settings are needed:
+username
- Mainflux password
- Mainflux url
- url of MQTT brokerAdditionally, you will need MQTT client certificates if you enable mTLS. To obtain certificates ca.crt
, thing.crt
and key thing.key
follow instructions here or here.
To set up an mTLS connection, the Export service requires a client certificate, and mtls in the config or MF_EXPORT_MQTT_MTLS must be set to true. The client certificate can be provided in a file; client_cert_path and client_cert_key_path are used for specifying the paths to the certificate files. If mTLS is used and no certificate file paths are specified, then Export will look in client_cert and client_cert_key of the config file, expecting the certificate content stored as a string.
+Routes are used for specifying which subscriber topic (subject) goes to which publishing topic. Currently only MQTT is supported for publishing. To match Mainflux requirements, mqtt_topic must contain channel/<channel_id>/messages; additional subtopics can be appended.
mqtt_topic
- channel/<channel_id>/messages/<custom_subtopic>
nats_topic
- Export
service will be subscribed to the Message Broker subject <nats_topic>.>
subtopic
- messages will be published to MQTT topic <mqtt_topic>/<subtopic>/<nats_subject>
, where dots in nats_subject are replaced with '/'workers
- specifies number of workers that will be used for message forwarding.type
- specifies message transformation:default
is for sending messages as they are received on the Message Broker with no transformation (so they should be in SenML or JSON format if we want to persist them in Mainflux in cloud). If you don't want to persist messages in Mainflux or you are not exporting to Mainflux cloud - message format can be anything that suits your application as message passes untransformed.mfx
is for messages that are being picked up on internal Mainflux Message Broker bus. When using Export
along with Mainflux deployed on gateway (Fig. 1) messages coming from MQTT broker that are published to the Message Broker bus are Mainflux message. Using mfx
type will extract payload and export
will publish it to mqtt_topic
. Extracted payload is SenML or JSON if we want to persist messages. nats_topic
in this case must be channels
, or if you want to pick messages from a specific channel in local Mainflux instance to be exported to cloud you can put channels.<local_mainflux_channel_id>
.Before running Export
service edit configs/config.toml
and provide username
, password
and url
username
- matches thing_id
in Mainflux cloud instancepassword
- matches thing_secret
channel
- MQTT part of the topic where to publish MQTT data (channel/<channel_id>/messages
is format of mainflux MQTT topic) and plays a part in authorization.If Mainflux and Export service are deployed on same gateway Export
can be configured to send messages from Mainflux internal Message Broker bus to Mainflux in a cloud. In order for Export
service to listen on Mainflux Message Broker deployed on the same machine Message Broker port must be exposed. Edit Mainflux docker-compose.yml. Default Message Broker, NATS, section must look like below:
nats:
+ image: nats:2.2.4
+ container_name: mainflux-nats
+ restart: on-failure
+ networks:
+ - mainflux-base-net
+ ports:
+ - 4222:4222
+
+Configuration file for Export
service can be sent over MQTT using Agent service.
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<control_ch_id>/messages/req -h localhost -p 18831 -m "[{\"bn\":\"1:\", \"n\":\"config\", \"vs\":\"save, export, <config_file_path>, <file_content_base64>\"}]"
+
+vs="save, export, config_file_path, file_content_base64"
- vs determines where to save file and contains file content in base64 encoding payload:
b,_ := toml.Marshal(export.Config)
+payload := base64.StdEncoding.EncodeToString(b)
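+For illustration, the same base64 payload could also be produced from an existing config file with a shell one-liner (GNU coreutils assumed):
+base64 -w0 configs/config.toml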
+
+There is a configuration.sh
script in a scripts
directory that can be used for automatic configuration and start up of remotely deployed export
. For this to work it is presumed that mainflux-export
and scripts/export_start
are placed in executable path on remote device. Additionally this script requires that remote device is provisioned following the steps described for provision service.
To run it first edit script to set parameters
+MTLS=false
+EXTERNAL_KEY='raspberry'
+EXTERNAL_ID='pi'
+MAINFLUX_HOST='mainflux.com'
+MAINFLUX_USER_EMAIL='edge@email.com'
+MAINFLUX_USER_PASSWORD='12345678'
+
+EXTERNAL_KEY
and EXTERNAL_ID
are parameters posted to /mapping
endpoint of provision
service, MAINFLUX_HOST
is location of cloud instance of Mainflux that export
should connect to and MAINFLUX_USER_EMAIL
and MAINFLUX_USER_PASSWORD
are users credentials in the cloud.
The following are steps that are an example usage of Mainflux components to connect edge with cloud. We will start Mainflux in the cloud with additional services Bootstrap and Provision. Using Bootstrap and Provision we will create a configuration for use in gateway deployment. On the gateway we will start services Agent and Export using previously created configuration.
+Start the Mainflux:
+docker-compose -f docker/docker-compose.yml up
+
+Start the Bootstrap service:
+docker-compose -f docker/addons/bootstrap/docker-compose.yml up
+
+Start the Provision service
+docker-compose -f docker/addons/provision/docker-compose.yml up
+
+Create user:
+mainflux-cli -u http://localhost:9002 users create test test@email.com 12345678
+
+Obtain user token:
+mainflux-cli -u http://localhost:9002 users token test@email.com 12345678
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTEzNTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.AND1sm6mN2wgUxVkDhpipCoNa87KPMghGaS5-4dU0iZaqGIUhWScrEJwOahT9ts1TZSd1qEcANTIffJ_y2Pbsg",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY4NTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.z3OWCHhNHNuvkzBqEAoLKWS6vpFLkIYXhH9cZogSCXd109-BbKVlLvYKmja-hkhaj_XDJKySDN3voiazBr_WTA",
+ "access_type": "Bearer"
+}
+
+USER_TOKEN=<access_token>
+
+Provision a gateway:
+curl -s -S -X POST http://localhost:9016/mapping -H "Authorization: Bearer $USER_TOKEN" -H 'Content-Type: application/json' -d '{"name":"testing", "external_id" : "54:FG:66:DC:43", "external_key":"223334fw2" }' | jq
+
+{
+ "things": [
+ {
+ "id": "88529fb2-6c1e-4b60-b9ab-73b5d89f7404",
+ "name": "thing",
+ "key": "3529c1bb-7211-4d40-9cd8-b05833196093",
+ "metadata": {
+ "external_id": "54:FG:66:DC:43"
+ }
+ }
+ ],
+ "channels": [
+ {
+ "id": "1aa3f736-0bd3-44b5-a917-a72cc743f633",
+ "name": "control-channel",
+ "metadata": {
+ "type": "control"
+ }
+ },
+ {
+ "id": "e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce",
+ "name": "data-channel",
+ "metadata": {
+ "type": "data"
+ }
+ }
+ ],
+ "whitelisted": {
+ "88529fb2-6c1e-4b60-b9ab-73b5d89f7404": true
+ }
+}
+
+Parameters Provision
will use them to create a bootstrap configuration that will make a relation with Mainflux entities used for connection, authentication and authorization thing
and channel
. These parameters will be used by Agent
service on the gateway to retrieve that information and establish a connection with the cloud.
Start the NATS and Agent service:
+gnatsd
+MF_AGENT_BOOTSTRAP_ID=54:FG:66:DC:43 \
+MF_AGENT_BOOTSTRAP_KEY="223334fw2" \
+MF_AGENT_BOOTSTRAP_URL=http://localhost:9013/things/bootstrap \
+build/mainflux-agent
+{"level":"info","message":"Requesting config for 54:FG:66:DC:43 from http://localhost:9013/things/bootstrap","ts":"2020-05-07T15:50:58.041145096Z"}
+{"level":"info","message":"Getting config for 54:FG:66:DC:43 from http://localhost:9013/things/bootstrap succeeded","ts":"2020-05-07T15:50:58.120779415Z"}
+{"level":"info","message":"Saving export config file /configs/export/config.toml","ts":"2020-05-07T15:50:58.121602229Z"}
+{"level":"warn","message":"Failed to save export config file Error writing config file: open /configs/export/config.toml: no such file or directory","ts":"2020-05-07T15:50:58.121752142Z"}
+{"level":"info","message":"Client agent-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 connected","ts":"2020-05-07T15:50:58.128500603Z"}
+{"level":"info","message":"Agent service started, exposed port 9003","ts":"2020-05-07T15:50:58.128531057Z"}
+
+git clone https://github.com/mainflux/export
+make
+
+Edit the configs/config.toml
setting
username
- thing from the results of provision request.password
- key from the results of provision request.mqtt_topic
- in routes set to channels/<channel_data_id>/messages
from results of provision.nats_topic
- whatever you need, export will subscribe to export.<nats_topic>
and forward messages to MQTT.host
- url of MQTT broker.[exp]
+ cache_pass = ""
+ cache_url = ""
+ log_level = "debug"
+ nats = "localhost:4222"
+ port = "8170"
+
+[mqtt]
+ ca_path = ""
+ cert_path = ""
+ host = "tcp://localhost:1883"
+ mtls = false
+ password = "3529c1bb-7211-4d40-9cd8-b05833196093"
+ priv_key_path = ""
+ qos = 0
+ retain = false
+ skip_tls_ver = false
+ username = "88529fb2-6c1e-4b60-b9ab-73b5d89f7404"
+
+[[routes]]
+ mqtt_topic = "channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages"
+ nats_topic = ">"
+ workers = 10
+
+cd build
+./mainflux-export
+2020/05/07 17:36:57 Configuration loaded from file ../configs/config.toml
+{"level":"info","message":"Export service started, exposed port :8170","ts":"2020-05-07T15:36:57.528398548Z"}
+{"level":"debug","message":"Client export-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 connected","ts":"2020-05-07T15:36:57.528405818Z"}
+
+git clone https://github.com/mainflux/agent
+go run ./examples/publish/main.go -s http://localhost:4222 export.test "[{\"bn\":\"test\"}]";
+
+We have configured a route for export: nats_topic = ">" means that it will listen to the NATS subject export.>, and mqtt_topic is configured so that data will be sent to the MQTT broker on the topic channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages with the NATS subject appended. Other brokers such as rabbitmq can also be used; for more detail refer to the dev-guide.
In terminal where export is started you should see following message:
+{"level":"debug","message":"Published to: export.test, payload: [{\"bn\":\"test\"}]","ts":"2020-05-08T15:14:15.757298992Z"}
+
+In Mainflux mqtt
service:
mainflux-mqtt | {"level":"info","message":"Publish - client ID export-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 to the topic: channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages/export/test","ts":"2020-05-08T15:16:02.999684791Z"}
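+If you also want to observe the exported messages on the cloud side, one way (purely illustrative, reusing the thing and data channel from the provisioning step above) is to subscribe to the same channel over MQTT:
+mosquitto_sub -u 88529fb2-6c1e-4b60-b9ab-73b5d89f7404 -P 3529c1bb-7211-4d40-9cd8-b05833196093 -t channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages/# -h localhost -p 1883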
+
+
+
+
+
+
+
Client is a component that will replace and unify the Mainflux Things and Users services. The purpose is to represent generic client accounts. Each client is identified using its identity and secret. The client will differ between the Things service and the Users service, but we aim to achieve a 1:1 implementation between the clients whilst changing how the client secret works. This includes client secret generation, usage, modification and storage.
+The client entity is represented by the Client struct in Go. The fields of this struct are as follows:
+// Credentials represent client credentials: its
+// "identity" which can be a username, email, generated name;
+// and "secret" which can be a password or access token.
+type Credentials struct {
+ Identity string `json:"identity,omitempty"` // username or generated login ID
+ Secret string `json:"secret"` // password or token
+}
+
+// Client represents generic Client.
+type Client struct {
+ ID string `json:"id"`
+ Name string `json:"name,omitempty"`
+ Tags []string `json:"tags,omitempty"`
+ Owner string `json:"owner,omitempty"` // nullable
+ Credentials Credentials `json:"credentials"`
+ Metadata Metadata `json:"metadata,omitempty"`
+ CreatedAt time.Time `json:"created_at"`
+ UpdatedAt time.Time `json:"updated_at,omitempty"`
+ UpdatedBy string `json:"updated_by,omitempty"`
+ Status Status `json:"status"` // 1 for enabled, 0 for disabled
+ Role Role `json:"role,omitempty"` // 1 for admin, 0 for normal user
+}
+
+ID
is a unique identifier for each client. It is a string value.Name
is an optional field that represents the name of the client.Tags
is an optional field that represents the tags related to the client. It is a slice of string values.Owner
is an optional field that represents the owner of the client.Credentials
is a struct that represents the client credentials. It contains two fields:Identity
This is the identity of the client, which can be a username, email, or generated name.Secret
This is the secret of the client, which can be a password, secret key, or access token.Metadata
is an optional field that is used for customized describing of the client.CreatedAt
is a field that represents the time when the client was created. It is a time.Time value.UpdatedAt
is a field that represents the time when the client was last updated. It is a time.Time value.UpdatedBy
is a field that represents the user who last updated the client.Status
is a field that represents the status for the client. It can be either 1 for enabled or 0 for disabled.Role
is an optional field that represents the role of the client. It can be either 1 for admin or 0 for the user.Currently, we have the things service and the users service as 2 deployments of the client entity. The things service is used to create, read, update, and delete things. The users service is used to create, read, update, and delete users. The client entity will be used to replace the things and users services. The client entity can be serialized to and from JSON format for communication with other services.
+For grouping Mainflux entities there are groups
object in the users
service. The users groups can be used for grouping users
only. Groups are organized like a tree, group can have one parent and children. Group with no parent is root of the tree.
In order to be easily integratable system, Mainflux is using Redis Streams as an event log for event sourcing. Services that are publishing events to Redis Streams are users
service, things
service, bootstrap
service and mqtt
adapter.
For every operation users
service will generate new event and publish it to Redis Stream called mainflux.users
. Every event has its own event ID that is automatically generated and operation
field that can have one of the following values:
user.create
for user creationuser.update
for user updateuser.remove
for user change of stateuser.view
for user viewuser.view_profile
for user profile viewuser.list
for listing usersuser.list_by_group
for listing users by groupuser.identify
for user identificationuser.generate_reset_token
for generating reset tokenuser.issue_token
for issuing tokenuser.refresh_token
for refreshing tokenuser.reset_secret
for resetting secretuser.send_password_reset
for sending password resetgroup.create
for group creationgroup.update
for group updategroup.remove
for group change of stategroup.view
for group viewgroup.list
for listing groupsgroup.list_by_user
for listing groups by userpolicy.authorize
for policy authorizationpolicy.add
for policy creationpolicy.update
for policy updatepolicy.remove
for policy deletionpolicy.list
for listing policiesBy fetching and processing these events you can reconstruct users
service state. If you store some of your custom data in metadata
field, this is the perfect way to fetch it and process it. If you want to integrate through docker-compose.yml you can use mainflux-es-redis
service. Just connect to it and consume events from Redis Stream named mainflux.users
.
Whenever user is created, users
service will generate new create
event. This event will have the following format:
1) "1693307171926-0"
+2) 1) "occurred_at"
+ 2) "1693307171925834295"
+ 3) "operation"
+ 4) "user.create"
+ 5) "id"
+ 6) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 7) "status"
+ 8) "enabled"
+ 9) "created_at"
+ 10) "2023-08-29T11:06:11.914074Z"
+ 11) "name"
+ 12) "-dry-sun"
+ 13) "metadata"
+ 14) "{}"
+ 15) "identity"
+ 16) "-small-flower@email.com"
+
+As you can see from this example, every odd field represents field name while every even field represents field value. This is standard event format for Redis Streams. If you want to extract metadata
field from this event, you'll have to read it as string first and then you can deserialize it to some structured format.
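+For a quick look at the raw events you can, for example, read the stream directly with redis-cli (a sketch; the mainflux-es-redis container name is an assumption based on the docker-compose setup mentioned above):
+# read the first 10 events from the mainflux.users stream
+docker exec -it mainflux-es-redis redis-cli XRANGE mainflux.users - + COUNT 10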
Whenever user is viewed, users
service will generate new view
event. This event will have the following format:
1) "1693307172248-0"
+2) 1) "name"
+ 2) "-holy-pond"
+ 3) "owner"
+ 4) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 5) "created_at"
+ 6) "2023-08-29T11:06:12.032254Z"
+ 7) "status"
+ 8) "enabled"
+ 9) "operation"
+ 10) "user.view"
+ 11) "id"
+ 12) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 13) "identity"
+ 14) "-snowy-wave@email.com"
+ 15) "metadata"
+ 16) "{}"
+ 17) "occurred_at"
+ 18) "1693307172247989798"
+
+Whenever user profile is viewed, users
service will generate new view_profile
event. This event will have the following format:
1) "1693308867001-0"
+2) 1) "id"
+ 2) "64fd20bf-e8fb-46bf-9b64-2a6572eda21b"
+ 3) "name"
+ 4) "admin"
+ 5) "identity"
+ 6) "admin@example.com"
+ 7) "metadata"
+ 8) "{\"role\":\"admin\"}"
+ 9) "created_at"
+ 10) "2023-08-29T10:55:23.048948Z"
+ 11) "status"
+ 12) "enabled"
+ 13) "occurred_at"
+ 14) "1693308867001792403"
+ 15) "operation"
+ 16) "user.view_profile"
+
+Whenever user list is fetched, users
service will generate new list
event. This event will have the following format:
1) "1693307172254-0"
+2) 1) "status"
+ 2) "enabled"
+ 3) "occurred_at"
+ 4) "1693307172254687479"
+ 5) "operation"
+ 6) "user.list"
+ 7) "total"
+ 8) "0"
+ 9) "offset"
+ 10) "0"
+ 11) "limit"
+ 12) "10"
+
+Whenever user list by group is fetched, users
service will generate new list_by_group
event. This event will have the following format:
1) "1693308952544-0"
+2) 1) "operation"
+ 2) "user.list_by_group"
+ 3) "total"
+ 4) "0"
+ 5) "offset"
+ 6) "0"
+ 7) "limit"
+ 8) "10"
+ 9) "group_id"
+ 10) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 11) "status"
+ 12) "enabled"
+ 13) "occurred_at"
+ 14) "1693308952544612677"
+
+Whenever user is identified, users
service will generate new identify
event. This event will have the following format:
1) "1693307172168-0"
+2) 1) "operation"
+ 2) "user.identify"
+ 3) "user_id"
+ 4) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 5) "occurred_at"
+ 6) "1693307172167980303"
+
+Whenever user reset token is generated, users
service will generate new generate_reset_token
event. This event will have the following format:
1) "1693310458376-0"
+2) 1) "operation"
+ 2) "user.generate_reset_token"
+ 3) "email"
+ 4) "rodneydav@gmail.com"
+ 5) "host"
+ 6) "http://localhost"
+ 7) "occurred_at"
+ 8) "1693310458376066097"
+
+Whenever user token is issued, users
service will generate new issue_token
event. This event will have the following format:
1) "1693307171987-0"
+2) 1) "operation"
+ 2) "user.issue_token"
+ 3) "identity"
+ 4) "-small-flower@email.com"
+ 5) "occurred_at"
+ 6) "1693307171987023095"
+
+Whenever user token is refreshed, users
service will generate new refresh_token
event. This event will have the following format:
1) "1693309886622-0"
+2) 1) "operation"
+ 2) "user.refresh_token"
+ 3) "occurred_at"
+ 4) "1693309886622414715"
+
+Whenever user secret is reset, users
service will generate new reset_secret
event. This event will have the following format:
1) "1693311075789-0"
+2) 1) "operation"
+ 2) "user.update_secret"
+ 3) "updated_by"
+ 4) "34591d29-13eb-49f8-995b-e474911eeb8a"
+ 5) "name"
+ 6) "rodney"
+ 7) "created_at"
+ 8) "2023-08-29T11:59:51.456429Z"
+ 9) "status"
+ 10) "enabled"
+ 11) "occurred_at"
+ 12) "1693311075789446621"
+ 13) "updated_at"
+ 14) "2023-08-29T12:11:15.785039Z"
+ 15) "id"
+ 16) "34591d29-13eb-49f8-995b-e474911eeb8a"
+ 17) "identity"
+ 18) "rodneydav@gmail.com"
+ 19) "metadata"
+ 20) "{}"
+
+Whenever user instance is updated, users
service will generate new update
event. This event will have the following format:
1) "1693307172308-0"
+2) 1) "operation"
+ 2) "user.update"
+ 3) "updated_by"
+ 4) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 5) "id"
+ 6) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 7) "metadata"
+ 8) "{\"Update\":\"rough-leaf\"}"
+ 9) "updated_at"
+ 10) "2023-08-29T11:06:12.294444Z"
+ 11) "name"
+ 12) "fragrant-voice"
+ 13) "identity"
+ 14) "-snowy-wave@email.com"
+ 15) "created_at"
+ 16) "2023-08-29T11:06:12.032254Z"
+ 17) "status"
+ 18) "enabled"
+ 19) "occurred_at"
+ 20) "1693307172308305030"
+
+Whenever user identity is updated, users
service will generate new update_identity
event. This event will have the following format:
1) "1693307172321-0"
+2) 1) "metadata"
+ 2) "{\"Update\":\"rough-leaf\"}"
+ 3) "created_at"
+ 4) "2023-08-29T11:06:12.032254Z"
+ 5) "status"
+ 6) "enabled"
+ 7) "updated_at"
+ 8) "2023-08-29T11:06:12.310276Z"
+ 9) "updated_by"
+ 10) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 11) "id"
+ 12) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 13) "name"
+ 14) "fragrant-voice"
+ 15) "operation"
+ 16) "user.update_identity"
+ 17) "identity"
+ 18) "wandering-brook"
+ 19) "occurred_at"
+ 20) "1693307172320906479"
+
+Whenever user tags are updated, users
service will generate new update_tags
event. This event will have the following format:
1) "1693307172332-0"
+2) 1) "name"
+ 2) "fragrant-voice"
+ 3) "identity"
+ 4) "wandering-brook"
+ 5) "metadata"
+ 6) "{\"Update\":\"rough-leaf\"}"
+ 7) "status"
+ 8) "enabled"
+ 9) "updated_at"
+ 10) "2023-08-29T11:06:12.323039Z"
+ 11) "updated_by"
+ 12) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 13) "id"
+ 14) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 15) "occurred_at"
+ 16) "1693307172332766275"
+ 17) "operation"
+ 18) "user.update_tags"
+ 19) "tags"
+ 20) "[patient-thunder]"
+ 21) "created_at"
+ 22) "2023-08-29T11:06:12.032254Z"
+
+Whenever user instance changes state in the system, users
service will generate and publish new remove
event. This event will have the following format:
1) "1693307172345-0"
+2) 1) "operation"
+ 2) "user.remove"
+ 3) "id"
+ 4) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 5) "status"
+ 6) "disabled"
+ 7) "updated_at"
+ 8) "2023-08-29T11:06:12.323039Z"
+ 9) "updated_by"
+ 10) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 11) "occurred_at"
+ 12) "1693307172345419824"
+
+1) "1693307172359-0"
+2) 1) "id"
+ 2) "56d2a797-dcb9-4fab-baf9-7c75e707b2c0"
+ 3) "status"
+ 4) "enabled"
+ 5) "updated_at"
+ 6) "2023-08-29T11:06:12.323039Z"
+ 7) "updated_by"
+ 8) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 9) "occurred_at"
+ 10) "1693307172359445655"
+ 11) "operation"
+ 12) "user.remove"
+
+Whenever group is created, users
service will generate new create
event. This event will have the following format:
1) "1693307172153-0"
+2) 1) "name"
+ 2) "-fragrant-resonance"
+ 3) "metadata"
+ 4) "{}"
+ 5) "occurred_at"
+ 6) "1693307172152850138"
+ 7) "operation"
+ 8) "group.create"
+ 9) "id"
+ 10) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 11) "status"
+ 12) "enabled"
+ 13) "created_at"
+ 14) "2023-08-29T11:06:12.129484Z"
+ 15) "owner"
+ 16) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+
+As you can see from this example, every odd field represents field name while every even field represents field value. This is standard event format for Redis Streams. If you want to extract metadata
field from this event, you'll have to read it as string first and then you can deserialize it to some structured format.
Whenever group instance is updated, users
service will generate new update
event. This event will have the following format:
1) "1693307172445-0"
+2) 1) "operation"
+ 2) "group.update"
+ 3) "owner"
+ 4) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 5) "name"
+ 6) "young-paper"
+ 7) "occurred_at"
+ 8) "1693307172445370750"
+ 9) "updated_at"
+ 10) "2023-08-29T11:06:12.429555Z"
+ 11) "updated_by"
+ 12) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 13) "id"
+ 14) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 15) "metadata"
+ 16) "{\"Update\":\"spring-wood\"}"
+ 17) "created_at"
+ 18) "2023-08-29T11:06:12.129484Z"
+ 19) "status"
+ 20) "enabled"
+
+Whenever group is viewed, users
service will generate new view
event. This event will have the following format:
1) "1693307172257-0"
+2) 1) "occurred_at"
+ 2) "1693307172257041358"
+ 3) "operation"
+ 4) "group.view"
+ 5) "id"
+ 6) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 7) "owner"
+ 8) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 9) "name"
+ 10) "-fragrant-resonance"
+ 11) "metadata"
+ 12) "{}"
+ 13) "created_at"
+ 14) "2023-08-29T11:06:12.129484Z"
+ 15) "status"
+ 16) "enabled"
+
+Whenever group list is fetched, users
service will generate new list
event. This event will have the following format:
1) "1693307172264-0"
+2) 1) "occurred_at"
+ 2) "1693307172264183217"
+ 3) "operation"
+ 4) "group.list"
+ 5) "total"
+ 6) "0"
+ 7) "offset"
+ 8) "0"
+ 9) "limit"
+ 10) "10"
+ 11) "status"
+ 12) "enabled"
+
+Whenever group list by user is fetched, users
service will generate new list_by_user
event. This event will have the following format:
1) "1693308937283-0"
+2) 1) "limit"
+ 2) "10"
+ 3) "channel_id"
+ 4) "bb1a7b38-cd79-410d-aca7-e744decd7b8e"
+ 5) "status"
+ 6) "enabled"
+ 7) "occurred_at"
+ 8) "1693308937282933017"
+ 9) "operation"
+ 10) "group.list_by_user"
+ 11) "total"
+ 12) "0"
+ 13) "offset"
+ 14) "0"
+
+Whenever a group instance changes state in the system (i.e. is disabled or enabled), the users service will generate and publish a new remove event. This event will have the following format:
1) "1693307172460-0"
+2) 1) "updated_by"
+ 2) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 3) "occurred_at"
+ 4) "1693307172459828786"
+ 5) "operation"
+ 6) "group.remove"
+ 7) "id"
+ 8) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 9) "status"
+ 10) "disabled"
+ 11) "updated_at"
+ 12) "2023-08-29T11:06:12.429555Z"
+
+1) "1693307172473-0"
+2) 1) "id"
+ 2) "bc7fb023-70d5-41aa-bf73-3eab1cf001c9"
+ 3) "status"
+ 4) "enabled"
+ 5) "updated_at"
+ 6) "2023-08-29T11:06:12.429555Z"
+ 7) "updated_by"
+ 8) "e1b982d8-a332-4bc2-aaff-4bbaa86880fc"
+ 9) "occurred_at"
+ 10) "1693307172473661564"
+ 11) "operation"
+ 12) "group.remove"
+
+Whenever policy is authorized, users
service will generate new authorize
event. This event will have the following format:
1) "1693311470724-0"
+2) 1) "entity_type"
+ 2) "thing"
+ 3) "object"
+ 4) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 5) "actions"
+ 6) "c_list"
+ 7) "occurred_at"
+ 8) "1693311470724174126"
+ 9) "operation"
+ 10) "policies.authorize"
+
+Whenever policy is added, users
service will generate new add
event. This event will have the following format:
1) "1693311470721-0"
+2) 1) "operation"
+ 2) "policies.add"
+ 3) "owner_id"
+ 4) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 5) "subject"
+ 6) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 7) "object"
+ 8) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 9) "actions"
+ 10) "[g_add,c_list]"
+ 11) "created_at"
+ 12) "2023-08-29T12:17:50.715541Z"
+ 13) "occurred_at"
+ 14) "1693311470721394773"
+
+Whenever policy is updated, users
service will generate new update
event. This event will have the following format:
1) "1693312500101-0"
+2) 1) "updated_at"
+ 2) "2023-08-29T12:35:00.095028Z"
+ 3) "occurred_at"
+ 4) "1693312500101367995"
+ 5) "operation"
+ 6) "policies.update"
+ 7) "owner_id"
+ 8) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 9) "subject"
+ 10) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 11) "object"
+ 12) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 13) "actions"
+ 14) "[g_add,c_list]"
+ 15) "created_at"
+ 16) "2023-08-29T12:17:50.715541Z"
+
+Whenever policy is removed, users
service will generate new remove
event. This event will have the following format:
1) "1693312590631-0"
+2) 1) "occurred_at"
+ 2) "1693312590631691388"
+ 3) "operation"
+ 4) "policies.delete"
+ 5) "subject"
+ 6) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 7) "object"
+ 8) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 9) "actions"
+ 10) "[g_add,c_list]"
+
+Whenever the policy list is fetched, the users service will generate a new list event. This event will have the following format:
1) "1693312633649-0"
+2) 1) "operation"
+ 2) "policies.list"
+ 3) "total"
+ 4) "0"
+ 5) "limit"
+ 6) "10"
+ 7) "offset"
+ 8) "0"
+ 9) "occurred_at"
+ 10) "1693312633649171129"
+
+For every operation that has side effects (i.e. that changes the service state), the things service will generate a new event and publish it to the Redis Stream called mainflux.things. Every event has its own automatically generated event ID and an operation field that can have one of the following values:
thing.create
for thing creationthing.update
for thing updatething.remove
for thing change of statething.view
for thing viewthing.list
for listing thingsthing.list_by_channel
for listing things by channelthing.identify
for thing identificationchannel.create
for channel creationchannel.update
for channel updatechannel.remove
for channel change of statechannel.view
for channel viewchannel.list
for listing channelschannel.list_by_thing
for listing channels by thingpolicy.authorize
for policy authorizationpolicy.add
for policy creationpolicy.update
for policy updatepolicy.remove
for policy deletionpolicy.list
for listing policiesBy fetching and processing these events you can reconstruct things
service state. If you store some of your custom data in metadata
field, this is the perfect way to fetch it and process it. If you want to integrate through docker-compose.yml you can use mainflux-es-redis
service. Just connect to it and consume events from Redis Stream named mainflux.things
.
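+For example, a minimal sketch of tailing this stream from the composition (the container name mainflux-es-redis is an assumption; adjust it to your docker-compose setup):
+docker exec -it mainflux-es-redis redis-cli XREAD BLOCK 0 STREAMS mainflux.things '$'
+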
Whenever thing is created, things
service will generate new create
event. This event will have the following format:
1) 1) "1693311470576-0"
+2) 1) "operation"
+ 2) "thing.create"
+ 3) "id"
+ 4) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 5) "status"
+ 6) "enabled"
+ 7) "created_at"
+ 8) "2023-08-29T12:17:50.566453Z"
+ 9) "name"
+ 10) "-broken-cloud"
+ 11) "owner"
+ 12) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 13) "metadata"
+ 14) "{}"
+ 15) "occurred_at"
+ 16) "1693311470576589894"
+
+As you can see from this example, every odd field represents a field name while every even field represents a field value. This is the standard event format for Redis Streams. If you want to extract the metadata field from this event, you'll have to read it as a string first, and then you can deserialize it to some structured format.
Whenever thing instance is updated, things
service will generate new update
event. This event will have the following format:
1) "1693311470669-0"
+2) 1) "operation"
+ 2) "thing.update"
+ 3) "updated_at"
+ 4) "2023-08-29T12:17:50.665752Z"
+ 5) "updated_by"
+ 6) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 7) "owner"
+ 8) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 9) "created_at"
+ 10) "2023-08-29T12:17:50.566453Z"
+ 11) "status"
+ 12) "enabled"
+ 13) "id"
+ 14) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 15) "name"
+ 16) "lingering-sea"
+ 17) "metadata"
+ 18) "{\"Update\":\"nameless-glitter\"}"
+ 19) "occurred_at"
+ 20) "1693311470669567023"
+
+Whenever thing secret is updated, things
service will generate new update_secret
event. This event will have the following format:
1) "1693311470676-0"
+2) 1) "id"
+ 2) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 3) "name"
+ 4) "lingering-sea"
+ 5) "metadata"
+ 6) "{\"Update\":\"nameless-glitter\"}"
+ 7) "status"
+ 8) "enabled"
+ 9) "occurred_at"
+ 10) "1693311470676563107"
+ 11) "operation"
+ 12) "thing.update_secret"
+ 13) "updated_at"
+ 14) "2023-08-29T12:17:50.672865Z"
+ 15) "updated_by"
+ 16) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 17) "owner"
+ 18) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 19) "created_at"
+ 20) "2023-08-29T12:17:50.566453Z"
+
+Whenever thing tags are updated, things
service will generate new update_tags
event. This event will have the following format:
1) "1693311470682-0"
+2) 1) "operation"
+ 2) "thing.update_tags"
+ 3) "owner"
+ 4) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 5) "metadata"
+ 6) "{\"Update\":\"nameless-glitter\"}"
+ 7) "status"
+ 8) "enabled"
+ 9) "occurred_at"
+ 10) "1693311470682522926"
+ 11) "updated_at"
+ 12) "2023-08-29T12:17:50.679301Z"
+ 13) "updated_by"
+ 14) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 15) "id"
+ 16) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 17) "name"
+ 18) "lingering-sea"
+ 19) "tags"
+ 20) "[morning-pine]"
+ 21) "created_at"
+ 22) "2023-08-29T12:17:50.566453Z"
+
+Whenever a thing instance changes state in the system (i.e. is disabled or enabled), the things service will generate and publish a new remove event. This event will have the following format:
1) "1693311470689-0"
+2) 1) "updated_by"
+ 2) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 3) "occurred_at"
+ 4) "1693311470688911826"
+ 5) "operation"
+ 6) "thing.remove"
+ 7) "id"
+ 8) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 9) "status"
+ 10) "disabled"
+ 11) "updated_at"
+ 12) "2023-08-29T12:17:50.679301Z"
+
+1) "1693311470695-0"
+2) 1) "operation"
+ 2) "thing.remove"
+ 3) "id"
+ 4) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 5) "status"
+ 6) "enabled"
+ 7) "updated_at"
+ 8) "2023-08-29T12:17:50.679301Z"
+ 9) "updated_by"
+ 10) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 11) "occurred_at"
+ 12) "1693311470695446948"
+
+Whenever thing is viewed, things
service will generate new view
event. This event will have the following format:
1) "1693311470608-0"
+2) 1) "operation"
+ 2) "thing.view"
+ 3) "id"
+ 4) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 5) "name"
+ 6) "-broken-cloud"
+ 7) "owner"
+ 8) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 9) "metadata"
+ 10) "{}"
+ 11) "created_at"
+ 12) "2023-08-29T12:17:50.566453Z"
+ 13) "status"
+ 14) "enabled"
+ 15) "occurred_at"
+ 16) "1693311470608701504"
+
+Whenever thing list is fetched, things
service will generate new list
event. This event will have the following format:
1) "1693311470613-0"
+2) 1) "occurred_at"
+ 2) "1693311470613173088"
+ 3) "operation"
+ 4) "thing.list"
+ 5) "total"
+ 6) "0"
+ 7) "offset"
+ 8) "0"
+ 9) "limit"
+ 10) "10"
+ 11) "status"
+ 12) "enabled"
+
+Whenever thing list by channel is fetched, things
service will generate new list_by_channel
event. This event will have the following format:
1) "1693312322620-0"
+2) 1) "operation"
+ 2) "thing.list_by_channel"
+ 3) "total"
+ 4) "0"
+ 5) "offset"
+ 6) "0"
+ 7) "limit"
+ 8) "10"
+ 9) "channel_id"
+ 10) "8d77099e-4911-4140-8555-7d3be65a1694"
+ 11) "status"
+ 12) "enabled"
+ 13) "occurred_at"
+ 14) "1693312322620481072"
+
+Whenever thing is identified, things
service will generate new identify
event. This event will have the following format:
1) "1693312391155-0"
+2) 1) "operation"
+ 2) "thing.identify"
+ 3) "thing_id"
+ 4) "dc82d6bf-973b-4582-9806-0230cee11c20"
+ 5) "occurred_at"
+ 6) "1693312391155123548"
+
+Whenever channel instance is created, things
service will generate and publish new create
event. This event will have the following format:
1) 1) "1693311470584-0"
+2) 1) "owner"
+ 2) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 3) "name"
+ 4) "-frosty-moon"
+ 5) "metadata"
+ 6) "{}"
+ 7) "occurred_at"
+ 8) "1693311470584416323"
+ 9) "operation"
+ 10) "channel.create"
+ 11) "id"
+ 12) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 13) "status"
+ 14) "enabled"
+ 15) "created_at"
+ 16) "2023-08-29T12:17:50.57866Z"
+
+Whenever channel instance is updated, things
service will generate and publish new update
event. This event will have the following format:
1) "1693311470701-0"
+2) 1) "updated_by"
+ 2) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 3) "owner"
+ 4) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 5) "created_at"
+ 6) "2023-08-29T12:17:50.57866Z"
+ 7) "status"
+ 8) "enabled"
+ 9) "operation"
+ 10) "channel.update"
+ 11) "updated_at"
+ 12) "2023-08-29T12:17:50.698278Z"
+ 13) "metadata"
+ 14) "{\"Update\":\"silent-hill\"}"
+ 15) "occurred_at"
+ 16) "1693311470701812291"
+ 17) "id"
+ 18) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 19) "name"
+ 20) "morning-forest"
+
+Note that the update channel event will contain only those fields that were updated using the update channel endpoint.
+Whenever a channel instance changes state in the system (i.e. is disabled or enabled), the things service will generate and publish a new remove event. This event will have the following format:
1) "1693311470708-0"
+2) 1) "status"
+ 2) "disabled"
+ 3) "updated_at"
+ 4) "2023-08-29T12:17:50.698278Z"
+ 5) "updated_by"
+ 6) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 7) "occurred_at"
+ 8) "1693311470708219296"
+ 9) "operation"
+ 10) "channel.remove"
+ 11) "id"
+ 12) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+
+1) "1693311470714-0"
+2) 1) "occurred_at"
+ 2) "1693311470714118979"
+ 3) "operation"
+ 4) "channel.remove"
+ 5) "id"
+ 6) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 7) "status"
+ 8) "enabled"
+ 9) "updated_at"
+ 10) "2023-08-29T12:17:50.698278Z"
+ 11) "updated_by"
+ 12) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+
+Whenever channel is viewed, things
service will generate new view
event. This event will have the following format:
1) "1693311470615-0"
+2) 1) "id"
+ 2) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 3) "owner"
+ 4) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 5) "name"
+ 6) "-frosty-moon"
+ 7) "metadata"
+ 8) "{}"
+ 9) "created_at"
+ 10) "2023-08-29T12:17:50.57866Z"
+ 11) "status"
+ 12) "enabled"
+ 13) "occurred_at"
+ 14) "1693311470615693019"
+ 15) "operation"
+ 16) "channel.view"
+
+Whenever channel list is fetched, things
service will generate new list
event. This event will have the following format:
1) "1693311470619-0"
+2) 1) "limit"
+ 2) "10"
+ 3) "status"
+ 4) "enabled"
+ 5) "occurred_at"
+ 6) "1693311470619194337"
+ 7) "operation"
+ 8) "channel.list"
+ 9) "total"
+ 10) "0"
+ 11) "offset"
+ 12) "0"
+
+Whenever channel list by thing is fetched, things
service will generate new list_by_thing
event. This event will have the following format:
1) "1693312299484-0"
+2) 1) "occurred_at"
+ 2) "1693312299484000183"
+ 3) "operation"
+ 4) "channel.list_by_thing"
+ 5) "total"
+ 6) "0"
+ 7) "offset"
+ 8) "0"
+ 9) "limit"
+ 10) "10"
+ 11) "thing_id"
+ 12) "dc82d6bf-973b-4582-9806-0230cee11c20"
+ 13) "status"
+ 14) "enabled"
+
+Whenever policy is authorized, things
service will generate new authorize
event. This event will have the following format:
1) "1693311470724-0"
+2) 1) "entity_type"
+ 2) "thing"
+ 3) "object"
+ 4) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 5) "actions"
+ 6) "m_read"
+ 7) "occurred_at"
+ 8) "1693311470724174126"
+ 9) "operation"
+ 10) "policies.authorize"
+
+Whenever policy is added, things
service will generate new add
event. This event will have the following format:
1) "1693311470721-0"
+2) 1) "operation"
+ 2) "policies.add"
+ 3) "owner_id"
+ 4) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 5) "subject"
+ 6) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 7) "object"
+ 8) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 9) "actions"
+ 10) "[m_write,m_read]"
+ 11) "created_at"
+ 12) "2023-08-29T12:17:50.715541Z"
+ 13) "occurred_at"
+ 14) "1693311470721394773"
+
+Whenever policy is updated, things
service will generate new update
event. This event will have the following format:
1) "1693312500101-0"
+2) 1) "updated_at"
+ 2) "2023-08-29T12:35:00.095028Z"
+ 3) "occurred_at"
+ 4) "1693312500101367995"
+ 5) "operation"
+ 6) "policies.update"
+ 7) "owner_id"
+ 8) "fe2e5de0-9900-4ac5-b364-eae0c35777fb"
+ 9) "subject"
+ 10) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 11) "object"
+ 12) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 13) "actions"
+ 14) "[m_write,m_read]"
+ 15) "created_at"
+ 16) "2023-08-29T12:17:50.715541Z"
+
+Whenever policy is removed, things
service will generate new remove
event. This event will have the following format:
1) "1693312590631-0"
+2) 1) "occurred_at"
+ 2) "1693312590631691388"
+ 3) "operation"
+ 4) "policies.delete"
+ 5) "subject"
+ 6) "12510af8-b6a7-410d-944c-9feded199d6d"
+ 7) "object"
+ 8) "8a85e2d5-e783-43ee-8bea-d6d0f8039e78"
+ 9) "actions"
+ 10) "[m_write,m_read]"
+
+Whenever policy list is fetched, things
service will generate new list
event. This event will have the following format:
1) "1693312633649-0"
+2) 1) "operation"
+ 2) "policies.list"
+ 3) "total"
+ 4) "0"
+ 5) "limit"
+ 6) "10"
+ 7) "offset"
+ 8) "0"
+ 9) "occurred_at"
+ 10) "1693312633649171129"
+
++Note: Every one of these events will omit fields that were not used or are not relevant for the specific operation. Also, field ordering is not guaranteed, so DO NOT rely on it.
+
Bootstrap service publishes events to Redis Stream called mainflux.bootstrap
. Every event from this service contains operation
field which indicates one of the following event types:
config.create
for configuration creation,config.update
for configuration update,config.remove
for configuration removal,thing.bootstrap
for device bootstrap,thing.state_change
for device state change,thing.update_connections
for device connection update.If you want to integrate through docker-compose.yml you can use mainflux-es-redis
service. Just connect to it and consume events from Redis Stream named mainflux.bootstrap
.
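+If you need processing that survives restarts without re-reading old events, a Redis consumer group is a common pattern. A sketch (the group and consumer names are arbitrary examples):
+redis-cli XGROUP CREATE mainflux.bootstrap bootstrap-workers '$' MKSTREAM
+redis-cli XREADGROUP GROUP bootstrap-workers worker-1 COUNT 10 BLOCK 0 STREAMS mainflux.bootstrap '>'
+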
Whenever configuration is created, bootstrap
service will generate and publish new create
event. This event will have the following format:
1) "1693313286544-0"
+2) 1) "state"
+ 2) "0"
+ 3) "operation"
+ 4) "config.create"
+ 5) "name"
+ 6) "demo"
+ 7) "channels"
+ 8) "[8d77099e-4911-4140-8555-7d3be65a1694]"
+ 9) "client_cert"
+ 10) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+ 11) "ca_cert"
+ 12) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+ 13) "occurred_at"
+ 14) "1693313286544243035"
+ 15) "thing_id"
+ 16) "dc82d6bf-973b-4582-9806-0230cee11c20"
+ 17) "content"
+ 18) "{ \"server\": { \"address\": \"127.0.0.1\", \"port\": 8080 }, \"database\": { \"host\": \"localhost\", \"port\": 5432, \"username\": \"user\", \"password\": \"password\", \"dbname\": \"mydb\" }, \"logging\": { \"level\": \"info\", \"file\": \"app.log\" } }"
+ 19) "owner"
+ 20) "64fd20bf-e8fb-46bf-9b64-2a6572eda21b"
+ 21) "external_id"
+ 22) "209327A2FA2D47E3B05F118D769DC521"
+ 23) "client_key"
+ 24) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+
+Whenever configuration is updated, bootstrap
service will generate and publish new update
event. This event will have the following format:
1) "1693313985263-0"
+2) 1) "state"
+ 2) "0"
+ 3) "operation"
+ 4) "config.update"
+ 5) "thing_id"
+ 6) "dc82d6bf-973b-4582-9806-0230cee11c20"
+ 7) "content"
+ 8) "{ \"server\": { \"address\": \"127.0.0.1\", \"port\": 8080 }, \"database\": { \"host\": \"localhost\", \"port\": 5432, \"username\": \"user\", \"password\": \"password\", \"dbname\": \"mydb\" } }"
+ 9) "name"
+ 10) "demo"
+ 11) "occurred_at"
+ 12) "1693313985263381501"
+
+Whenever certificate is updated, bootstrap
service will generate and publish new update
event. This event will have the following format:
1) "1693313759203-0"
+2) 1) "thing_key"
+ 2) "dc82d6bf-973b-4582-9806-0230cee11c20"
+ 3) "client_cert"
+ 4) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+ 5) "client_key"
+ 6) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+ 7) "ca_cert"
+ 8) "-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----"
+ 9) "operation"
+ 10) "cert.update"
+ 11) "occurred_at"
+ 12) "1693313759203076553"
+
+Whenever configuration list is fetched, bootstrap
service will generate new list
event. This event will have the following format:
1) "1693339274766-0"
+2) 1) "occurred_at"
+ 2) "1693339274766130265"
+ 3) "offset"
+ 4) "0"
+ 5) "limit"
+ 6) "10"
+ 7) "operation"
+ 8) "config.list"
+
+Whenever configuration is viewed, bootstrap
service will generate new view
event. This event will have the following format:
1) 1) "1693339152105-0"
+2) 1) "thing_id"
+ 2) "74f00d13-d370-42c0-b528-04fff995275c"
+ 3) "name"
+ 4) "demo"
+ 5) "external_id"
+ 6) "FF-41-EF-BC-90-BC"
+ 7) "channels"
+ 8) "[90aae157-d47f-4d71-9a68-b000c0025ae8]"
+ 9) "client_cert"
+ 10) "-----BEGIN PRIVATE KEY-----MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDVYaZsyd76aSWZexY/OyX8hVdE+ruT3OZrE6gFSjDiaAA2Uf5/eHT1BJdR4LviooXix8vfc/g5CAN/z98zmUmAzx9lk5T4sRhJfqYQ2yDEt1tVDwD3RzL9RHXRWiZu4thk253jOpT15VFvOf5wE6mhVozFl9OetVJb4eqKbHx9RY0rMXwiBiCC2LcUtcp6rVjp4pK6VGjehA8siVX9bnRsIY776jDb/pm2n+y5G+bd1CifSdgTrr7QLKFlx0//5lyslmfUbf76kg9bZ8Qe2NdFKvcpEZ4ENxtwMrqW2i1pTExVHNpka8rhA5936qpDKu1ce+kccIbFsPRAHU5PyXfNAgMBAAECggEAAtBt4c4WcGuwlkHxp4B/3hZix0Md9DOb9QTmWLjYxN5QRRHMbyFHPEVaOuHhjc9M6r0YgD2cTsw/QjvwmqfxOI9YFP6JnsS0faD7pF9EzbNes1QmVByOnJkpi0r1aiL4baQZL0+sz+1n/IqMQ4LO4D+ETcV/LKmsM2VbCDD+wfwsVkTmgaqKtXIFQ3bOU5LjRcxCZFs81z3mYDyP4hfnlmTWOOXcf8yLqx5LGH8erCGXgrhZiN5/mhkzUpkF75Eo4qt3jVZEt+d48RnPsk0TO0rqs4j5F3d/6Dboi3UpRlHZ4vEM7MeDGoMuXTh59MzbV1e/03sY2jTtB2NVQ51pFQKBgQD0kjYorDqu5e82Orp5rRkS58nUDgq3vaxNKJq+32LuuTuNjRrM57XoyBAVnBlfTP5IOQaxjYPNxHkZhYdYREyZKx96g6FZUWLQxKO+vP+E25MXSsnP8FMkQNfgSvMCxfIyFO3aVbDUme6bIScPNCTzKVWSWTj5Zyyig9VQpoRJ5wKBgQDfWlF7krUefQEvdJFxd9IGBvlkWkGi942Hh0H6vJCzhMQO8DeHZjO4oiiCEpRmBdkLDlZs81mykmyFEpjcmv4JD23HQ9IPi0/4Bsuu3SDXF4HC5/QYldaG0behBmMmDYuaQ0NHY5rpCnpZBteYT6V6lcBm/AIKwvz+N8fY2fDCKwKBgQDfBCjQw+SrMc8FI16Br7+KhsR7UuahEBt7LIiXfvom98//TuleafhuMWjBW9ujFIFXeHDLHWFQFFXdWO7HJVi33yPQQxGxcc5q0rUCLDPQga1Kcw8+R0Z5a4uu4olgQQKOepk+HB+obkmvOfb1HTaIaWu3jRawDk4cT50H8x/0hwKBgB63eB9LhNclj+Ur3djCBsNHcELp2r8D1pX99wf5qNjXeHMpfCmF17UbsAB7d6c0RK4tkZs4OGzDkGMYtKcaNbefRJSz8g6rNRtCK/7ncF3EYNciOUKsUK2H5/4gN8CC+mEDwRvvSd2k0ECwHTRYN8TNFYHURJ+gQ1Te7QAYsPCzAoGBAMZnbAY1Q/gK11JaPE2orFb1IltDRKB2IXh5Ton0ZCqhmOhMLQ+4t7DLPUKdXlsBZa/IIm5XehBg6VajbG0zulKLzO4YHuWEduwYON+4DNQxLWhBCBauOZ7+dcGUvYkeKoySYs6hznV9mlMHe1TuhCO8zHjpvBXOrlAR8VX5BXKz-----END PRIVATE KEY-----"
+ 11) "state"
+ 12) "0"
+ 13) "operation"
+ 14) "config.view"
+ 15) "content"
+ 16) "{\"device_id\": \"12345\",\"secure_connection\": true,\"sensor_config\": {\"temperature\": true,\"humidity\": true,\"pressure\": false}}"
+ 17) "owner"
+ 18) "b2972472-c93c-408f-9b77-0f8a81ee47af"
+ 19) "occurred_at"
+ 20) "1693339152105496336"
+
+Whenever configuration is removed, bootstrap
service will generate and publish new remove
event. This event will have the following format:
1) "1693339203771-0"
+2) 1) "occurred_at"
+ 2) "1693339203771705590"
+ 3) "thing_id"
+ 4) "853f37b9-513a-41a2-a575-bbaa746961a6"
+ 5) "operation"
+ 6) "config.remove"
+
+Whenever a thing is removed, bootstrap
service will generate and publish new config.remove_handler
event. This event will have the following format:
1) 1) "1693337955655-0"
+2) 1) "config_id"
+ 2) "0198b458-573e-415a-aa05-052ddab9709d"
+ 3) "operation"
+ 4) "config.remove_handler"
+ 5) "occurred_at"
+ 6) "1693337955654969489"
+
+Whenever thing is bootstrapped, bootstrap
service will generate and publish new bootstrap
event. This event will have the following format:
1) 1) "1693339161600-0"
+2) 1) "occurred_at"
+ 2) "1693339161600369325"
+ 3) "external_id"
+ 4) "FF-41-EF-BC-90-BC"
+ 5) "success"
+ 6) "1"
+ 7) "operation"
+ 8) "thing.bootstrap"
+ 9) "thing_id"
+ 10) "74f00d13-d370-42c0-b528-04fff995275c"
+ 11) "content"
+ 12) "{\"device_id\": \"12345\",\"secure_connection\": true,\"sensor_config\": {\"temperature\": true,\"humidity\": true,\"pressure\": false}}"
+ 13) "owner"
+ 14) "b2972472-c93c-408f-9b77-0f8a81ee47af"
+ 15) "name"
+ 16) "demo"
+ 17) "channels"
+ 18) "[90aae157-d47f-4d71-9a68-b000c0025ae8]"
+ 19) "ca_cert"
+ 20) "-----BEGIN PRIVATE KEY-----MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDVYaZsyd76aSWZexY/OyX8hVdE+ruT3OZrE6gFSjDiaAA2Uf5/eHT1BJdR4LviooXix8vfc/g5CAN/z98zmUmAzx9lk5T4sRhJfqYQ2yDEt1tVDwD3RzL9RHXRWiZu4thk253jOpT15VFvOf5wE6mhVozFl9OetVJb4eqKbHx9RY0rMXwiBiCC2LcUtcp6rVjp4pK6VGjehA8siVX9bnRsIY776jDb/pm2n+y5G+bd1CifSdgTrr7QLKFlx0//5lyslmfUbf76kg9bZ8Qe2NdFKvcpEZ4ENxtwMrqW2i1pTExVHNpka8rhA5936qpDKu1ce+kccIbFsPRAHU5PyXfNAgMBAAECggEAAtBt4c4WcGuwlkHxp4B/3hZix0Md9DOb9QTmWLjYxN5QRRHMbyFHPEVaOuHhjc9M6r0YgD2cTsw/QjvwmqfxOI9YFP6JnsS0faD7pF9EzbNes1QmVByOnJkpi0r1aiL4baQZL0+sz+1n/IqMQ4LO4D+ETcV/LKmsM2VbCDD+wfwsVkTmgaqKtXIFQ3bOU5LjRcxCZFs81z3mYDyP4hfnlmTWOOXcf8yLqx5LGH8erCGXgrhZiN5/mhkzUpkF75Eo4qt3jVZEt+d48RnPsk0TO0rqs4j5F3d/6Dboi3UpRlHZ4vEM7MeDGoMuXTh59MzbV1e/03sY2jTtB2NVQ51pFQKBgQD0kjYorDqu5e82Orp5rRkS58nUDgq3vaxNKJq+32LuuTuNjRrM57XoyBAVnBlfTP5IOQaxjYPNxHkZhYdYREyZKx96g6FZUWLQxKO+vP+E25MXSsnP8FMkQNfgSvMCxfIyFO3aVbDUme6bIScPNCTzKVWSWTj5Zyyig9VQpoRJ5wKBgQDfWlF7krUefQEvdJFxd9IGBvlkWkGi942Hh0H6vJCzhMQO8DeHZjO4oiiCEpRmBdkLDlZs81mykmyFEpjcmv4JD23HQ9IPi0/4Bsuu3SDXF4HC5/QYldaG0behBmMmDYuaQ0NHY5rpCnpZBteYT6V6lcBm/AIKwvz+N8fY2fDCKwKBgQDfBCjQw+SrMc8FI16Br7+KhsR7UuahEBt7LIiXfvom98//TuleafhuMWjBW9ujFIFXeHDLHWFQFFXdWO7HJVi33yPQQxGxcc5q0rUCLDPQga1Kcw8+R0Z5a4uu4olgQQKOepk+HB+obkmvOfb1HTaIaWu3jRawDk4cT50H8x/0hwKBgB63eB9LhNclj+Ur3djCBsNHcELp2r8D1pX99wf5qNjXeHMpfCmF17UbsAB7d6c0RK4tkZs4OGzDkGMYtKcaNbefRJSz8g6rNRtCK/7ncF3EYNciOUKsUK2H5/4gN8CC+mEDwRvvSd2k0ECwHTRYN8TNFYHURJ+gQ1Te7QAYsPCzAoGBAMZnbAY1Q/gK11JaPE2orFb1IltDRKB2IXh5Ton0ZCqhmOhMLQ+4t7DLPUKdXlsBZa/IIm5XehBg6VajbG0zulKLzO4YHuWEduwYON+4DNQxLWhBCBauOZ7+dcGUvYkeKoySYs6hznV9mlMHe1TuhCO8zHjpvBXOrlAR8VX5BXKz-----END PRIVATE KEY-----"
+
+
+Whenever thing's state changes, bootstrap
service will generate and publish new change state
event. This event will have the following format:
1) "1555405294806-0"
+2) 1) "thing_id"
+ 2) "63a110d4-2b77-48d2-aa46-2582681eeb82"
+ 3) "state"
+ 4) "0"
+ 5) "timestamp"
+ 6) "1555405294"
+ 7) "operation"
+ 8) "thing.state_change"
+
+Whenever thing's list of connections is updated, bootstrap
service will generate and publish new update connections
event. This event will have the following format:
1) "1555405373360-0"
+2) 1) "operation"
+ 2) "thing.update_connections"
+ 3) "thing_id"
+ 4) "63a110d4-2b77-48d2-aa46-2582681eeb82"
+ 5) "channels"
+ 6) "ff13ca9c-7322-4c28-a25c-4fe5c7b753fc, 925461e6-edfb-4755-9242-8a57199b90a5, c3642289-501d-4974-82f2-ecccc71b2d82"
+ 7) "timestamp"
+ 8) "1555405373"
+
+Whenever channel is updated, bootstrap
service will generate and publish new update handler
event. This event will have the following format:
1) "1693339403536-0"
+2) 1) "operation"
+ 2) "channel.update_handler"
+ 3) "channel_id"
+ 4) "0e602731-36ba-4a29-adba-e5761f356158"
+ 5) "name"
+ 6) "dry-sky"
+ 7) "metadata"
+ 8) "{\"log\":\"info\"}"
+ 9) "occurred_at"
+ 10) "1693339403536636387"
+
+Whenever channel is removed, bootstrap
service will generate and publish new remove handler
event. This event will have the following format:
1) "1693339468719-0"
+2) 1) "config_id"
+ 2) "0198b458-573e-415a-aa05-052ddab9709d"
+ 3) "operation"
+ 4) "config.remove_handler"
+ 5) "occurred_at"
+ 6) "1693339468719177463"
+
+Instead of using a heartbeat to know when a client is connected through the MQTT adapter, one can fetch events from the Redis Stream that the MQTT adapter publishes. The MQTT adapter publishes an event to the stream named mainflux.mqtt every time a client connects or disconnects.
Events coming from the MQTT adapter have the following fields:
+thing_id
ID of a thing that has connected to the MQTT adapter,
event_type
can have two possible values, connect and disconnect,
instance
represents the MQTT adapter instance.
occurred_at
is a UNIX Epoch timestamp.
If you want to integrate through docker-compose.yml, you can use the mainflux-es-redis service: just connect to it and consume events from the Redis Stream named mainflux.mqtt.
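+For example, a minimal sketch that follows connect and disconnect events as they happen (assuming the event store is reachable on localhost:6379):
+redis-cli XREAD BLOCK 0 STREAMS mainflux.mqtt '$'
+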
Example of connect event:
+1) 1) "1693312937469-0"
+2) 1) "thing_id"
+ 2) "76a58221-e319-492a-be3e-b3d15631e92a"
+ 3) "event_type"
+ 4) "connect"
+ 5) "instance"
+ 6) ""
+ 7) "occurred_at"
+ 8) "1693312937469719069"
+
+Example of disconnect event:
+1) 1) "1693312937471-0"
+2) 1) "thing_id"
+ 2) "76a58221-e319-492a-be3e-b3d15631e92a"
+ 3) "event_type"
+ 4) "disconnect"
+ 5) "instance"
+ 6) ""
+ 7) "occurred_at"
+ 8) "1693312937471064150"
+
+
+
+
+
+
+
+ Before proceeding, install the following prerequisites:
+Once everything is installed, execute the following command from project root:
+make run
+
+This will start Mainflux docker composition, which will output the logs from the containers.
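+Optionally, you can verify that the containers are up before proceeding, for example:
+docker ps | grep mainflux
+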
+Open a new terminal from which you can interact with the running Mainflux system. The easiest way to do this is by using the Mainflux CLI, which can be downloaded as a tarball from GitHub (here we use release 0.14.0
but be sure to use the latest CLI release):
wget -O- https://github.com/mainflux/mainflux/releases/download/0.14.0/mainflux-cli_0.14.0_linux-amd64.tar.gz | tar xvz -C $GOBIN
+
++Make sure that $GOBIN is added to your $PATH so that the mainflux-cli command is accessible system-wide.
Build mainflux-cli from source if the pre-built CLI is not compatible with your OS, e.g. macOS. Please see the CLI for further details.
Once installed, you can use the CLI to quick-provision the system for testing:
+mainflux-cli provision test
+
+This command actually creates a temporary testing user, logs it in, then creates two things and two channels on behalf of this user. This quickly provisions a Mainflux system with one simple testing scenario.
+You can read more about system provisioning in the dedicated Provisioning chapter
+Output of the command follows this pattern:
+{
+ "created_at": "2023-04-04T08:02:47.686337Z",
+ "credentials": {
+ "identity": "crazy_feistel@email.com",
+ "secret": "12345678"
+ },
+ "id": "0216df07-8f08-40ef-ba91-ff0e700f387a",
+ "name": "crazy_feistel",
+ "status": "enabled",
+ "updated_at": "2023-04-04T08:02:47.686337Z"
+}
+
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw",
+ "access_type": "Bearer",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2ODE3NjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJyZWZyZXNoIn0.3xcrkIBbi2a8firNHtnK6I8sBBOgrQ6XBa3x7cybKc6omOuqrkkNjXGjKU9tgShvjpfCWT48AR1VqO_VxJxL8g"
+}
+
+
+[
+ {
+ "created_at": "2023-04-04T08:02:47.81865461Z",
+ "credentials": {
+ "secret": "fc9473d8-6756-4fcc-968f-ea43cd0b803b"
+ },
+ "id": "5d5e593b-7629-4cc3-bebc-b20d8ab9dbef",
+ "name": "d0",
+ "owner": "0216df07-8f08-40ef-ba91-ff0e700f387a",
+ "status": "enabled",
+ "updated_at": "2023-04-04T08:02:47.81865461Z"
+ },
+ {
+ "created_at": "2023-04-04T08:02:47.818661382Z",
+ "credentials": {
+ "secret": "56a4b1bd-9750-42b3-a3cb-cf5ee2b86fe4"
+ },
+ "id": "45048a8e-c602-4e91-9556-a9d3af6617fb",
+ "name": "d1",
+ "owner": "0216df07-8f08-40ef-ba91-ff0e700f387a",
+ "status": "enabled",
+ "updated_at": "2023-04-04T08:02:47.818661382Z"
+ }
+]
+
+
+[
+ {
+ "created_at": "2023-04-04T08:02:47.857619Z",
+ "id": "a31e16f8-343c-4366-8b4f-c95e190937f4",
+ "name": "c0",
+ "owner_id": "0216df07-8f08-40ef-ba91-ff0e700f387a",
+ "status": "enabled",
+ "updated_at": "2023-04-04T08:02:47.857619Z"
+ },
+ {
+ "created_at": "2023-04-04T08:02:47.867336Z",
+ "id": "e20ad0bb-c490-47dd-9366-fb8ffd56c5dc",
+ "name": "c1",
+ "owner_id": "0216df07-8f08-40ef-ba91-ff0e700f387a",
+ "status": "enabled",
+ "updated_at": "2023-04-04T08:02:47.867336Z"
+ }
+]
+
+
+In the Mainflux system terminal (where docker compose is running) you should see the following logs:
+...
+mainflux-users | {"level":"info","message":"Method register_client with id 0216df07-8f08-40ef-ba91-ff0e700f387a using token took 87.335902ms to complete without errors.","ts":"2023-04-04T08:02:47.722776862Z"}
+mainflux-users | {"level":"info","message":"Method issue_token of type Bearer for client crazy_feistel@email.com took 55.342161ms to complete without errors.","ts":"2023-04-04T08:02:47.783884818Z"}
+mainflux-users | {"level":"info","message":"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 1.389463ms to complete without errors.","ts":"2023-04-04T08:02:47.817018631Z"}
+mainflux-things | {"level":"info","message":"Method create_things 2 things using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 48.137759ms to complete without errors.","ts":"2023-04-04T08:02:47.853310066Z"}
+mainflux-users | {"level":"info","message":"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 302.571µs to complete without errors.","ts":"2023-04-04T08:02:47.856820523Z"}
+mainflux-things | {"level":"info","message":"Method create_channel for 2 channels using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 15.340692ms to complete without errors.","ts":"2023-04-04T08:02:47.872089509Z"}
+mainflux-users | {"level":"info","message":"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 271.162µs to complete without errors.","ts":"2023-04-04T08:02:47.875812318Z"}
+mainflux-things | {"level":"info","message":"Method add_policy for client with id 5d5e593b-7629-4cc3-bebc-b20d8ab9dbef using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 28.632906ms to complete without errors.","ts":"2023-04-04T08:02:47.904041832Z"}
+mainflux-users | {"level":"info","message":"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 269.959µs to complete without errors.","ts":"2023-04-04T08:02:47.906989497Z"}
+mainflux-things | {"level":"info","message":"Method add_policy for client with id 5d5e593b-7629-4cc3-bebc-b20d8ab9dbef using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 6.303771ms to complete without errors.","ts":"2023-04-04T08:02:47.910594262Z"}
+mainflux-users | {"level":"info","message":"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 364.448µs to complete without errors.","ts":"2023-04-04T08:02:47.912905436Z"}
+mainflux-things | {"level":"info","message":"Method add_policy for client with id 45048a8e-c602-4e91-9556-a9d3af6617fb using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 7.73352ms to complete without errors.","ts":"2023-04-04T08:02:47.920205467Z"}
+...
+
+
+This proves that these provisioning commands were sent from the CLI to the Mainflux system.
+Once the system is provisioned, a thing can start sending messages on a channel:
mainflux-cli messages send <channel_id> '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' <thing_secret>
+
+For example:
+mainflux-cli messages send a31e16f8-343c-4366-8b4f-c95e190937f4 '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]' fc9473d8-6756-4fcc-968f-ea43cd0b803b
+
+In the Mainflux system terminal you should see the following logs:
+...
+mainflux-things | {"level":"info","message":"Method authorize_by_key for channel with id a31e16f8-343c-4366-8b4f-c95e190937f4 by client with secret fc9473d8-6756-4fcc-968f-ea43cd0b803b took 7.048706ms to complete without errors.","ts":"2023-04-04T08:06:09.750992633Z"}
+mainflux-broker | [1] 2023/04/04 08:06:09.753072 [TRC] 192.168.144.11:60616 - cid:10 - "v1.18.0:go" - <<- [PUB channels.a31e16f8-343c-4366-8b4f-c95e190937f4 261]
+mainflux-broker | [1] 2023/04/04 08:06:09.754037 [TRC] 192.168.144.11:60616 - cid:10 - "v1.18.0:go" - <<- MSG_PAYLOAD: ["\n$a31e16f8-343c-4366-8b4f-c95e190937f4\x1a$5d5e593b-7629-4cc3-bebc-b20d8ab9dbef\"\x04http*\xa6\x01[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]0\xd9\xe6\x8b\xc9Ø\xab\xa9\x17"]
+mainflux-broker | [1] 2023/04/04 08:06:09.755550 [TRC] 192.168.144.13:58572 - cid:8 - "v1.18.0:go" - ->> [MSG channels.a31e16f8-343c-4366-8b4f-c95e190937f4 1 261]
+mainflux-http | {"level":"info","message":"Method publish to channel a31e16f8-343c-4366-8b4f-c95e190937f4 took 15.979094ms to complete without errors.","ts":"2023-04-04T08:06:09.75232571Z"}
+...
+
+This proves that messages have been correctly sent through the system via the protocol adapter (mainflux-http
).
Mainflux is a modern, scalable, secure, open-source and patent-free IoT cloud platform written in Go.
+It accepts user and thing connections over various network protocols (i.e. HTTP, MQTT, WebSocket, CoAP), thus making a seamless bridge between them. It is used as the IoT middleware for building complex IoT solutions.
+Thank you for your interest in Mainflux and the desire to contribute!
+Take a look at our open issues. The good-first-issue label is specifically for issues that are great for getting started. Check out the contribution guide to learn more about our style and conventions. Make your changes compatible with our workflow.
+Mainflux can be easily deployed on the Kubernetes platform by using the Helm chart from the official Mainflux DevOps GitHub repository.
+Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerised applications. Install it locally or have access to a cluster. Follow these instructions if you need more information.
+Kubectl is the official Kubernetes command-line client. Follow these instructions to install it.
+For cluster control with kubectl, the default config .yaml file should be ~/.kube/config.
Helm is the package manager for Kubernetes. Follow these instructions to install it.
+Add a stable chart repository:
+helm repo add stable https://charts.helm.sh/stable
+
+Add a bitnami chart repository:
+helm repo add bitnami https://charts.bitnami.com/bitnami
+
+Follow these instructions to install it or:
+helm install ingress-nginx ingress-nginx/ingress-nginx --version 3.26.0 --create-namespace -n ingress-nginx
+
+Get Helm charts from Mainflux DevOps GitHub repository:
+git clone https://github.com/mainflux/devops.git
+cd devops/charts/mainflux
+
+Update the on-disk dependencies to mirror Chart.yaml:
+helm dependency update
+
+If you don't already have the namespace created, you can create it with:
+kubectl create namespace mf
+
+Deploying a release named mainflux in the namespace mf is done with just:
helm install mainflux . -n mf
+
+Mainflux is now deployed on your Kubernetes cluster.
+You can override default values while installing with --set
option. For example, if you want to specify ingress hostname and pull latest
tag of users
image:
helm install mainflux -n mf --set ingress.hostname='example.com' --set users.image.tag='latest'
+
+Or, if the release is already installed, you can update it:
+helm upgrade mainflux -n mf --set ingress.hostname='example.com' --set users.image.tag='latest'
+
+The following table lists the configurable parameters and their default values.
+Parameter | +Description | +Default | +
---|---|---|
defaults.logLevel | +Log level | +debug | +
defaults.image.pullPolicy | +Docker Image Pull Policy | +IfNotPresent | +
defaults.image.repository | +Docker Image Repository | +mainflux | +
defaults.image.tag | +Docker Image Tag | +0.13.0 | +
defaults.replicaCount | +Replicas of MQTT adapter, Things, Envoy and Authn | +3 | +
defaults.messageBrokerUrl | +Message broker URL, the default is NATS Url | +nats://nats:4222 | +
defaults.jaegerPort | +Jaeger port | +6831 | +
nginxInternal.mtls.tls | +TLS secret which contains the server cert/key | ++ |
nginxInternal.mtls.intermediateCrt | +Generic secret which contains the intermediate cert used to verify clients | ++ |
ingress.enabled | +Should the Nginx Ingress be created | +true | +
ingress.hostname | +Hostname for the Nginx Ingress | ++ |
ingress.tls.hostname | +Hostname of the Nginx Ingress certificate | ++ |
ingress.tls.secret | +TLS secret for the Nginx Ingress | ++ |
messageBroker.maxPayload | +Maximum payload size in bytes that the Message Broker server (NATS by default) will accept | +268435456 | +
messageBroker.replicaCount | +Message Broker replicas | +3 | +
users.dbPort | +Users service DB port | +5432 | +
users.httpPort | +Users service HTTP port | +9000 | +
things.dbPort | +Things service DB port | +5432 | +
things.httpPort | +Things service HTTP port | +9001 | +
things.authGrpcPort | +Things service Auth gRPC port | +7000 | +
things.authHttpPort | +Things service Auth HTTP port | +9002 | +
things.redisESPort | +Things service Redis Event Store port | +6379 | +
things.redisCachePort | +Things service Redis Auth Cache port | +6379 | +
adapter_http.httpPort | +HTTP adapter port | +8185 | +
mqtt.proxy.mqttPort | +MQTT adapter proxy port | +1884 | +
mqtt.proxy.wsPort | +MQTT adapter proxy WS port | +8081 | +
mqtt.broker.mqttPort | +MQTT adapter broker port | +1883 | +
mqtt.broker.wsPort | +MQTT adapter broker WS port | +8080 | +
mqtt.broker.persistentVolume.size | +MQTT adapter broker data Persistent Volume size | +5Gi | +
mqtt.redisESPort | +MQTT adapter Event Store port | +6379 | +
mqtt.redisCachePort | +MQTT adapter Redis Auth Cache port | +6379 | +
adapter_coap.udpPort | +CoAP adapter UDP port | +5683 | +
ui.port | +UI port | +3000 | +
bootstrap.enabled | +Enable bootstrap service | +false | +
bootstrap.dbPort | +Bootstrap service DB port | +5432 | +
bootstrap.httpPort | +Bootstrap service HTTP port | +9013 | +
bootstrap.redisESPort | +Bootstrap service Redis Event Store port | +6379 | +
influxdb.enabled | +Enable InfluxDB reader & writer | +false | +
influxdb.dbPort | +InfluxDB port | +8086 | +
influxdb.writer.httpPort | +InfluxDB writer HTTP port | +9006 | +
influxdb.reader.httpPort | +InfluxDB reader HTTP port | +9005 | +
adapter_opcua.enabled | +Enable OPC-UA adapter | +false | +
adapter_opcua.httpPort | +OPC-UA adapter HTTP port | +8188 | +
adapter_opcua.redisRouteMapPort | +OPC-UA adapter Redis Auth Cache port | +6379 | +
adapter_lora.enabled | +Enable LoRa adapter | +false | +
adapter_lora.httpPort | +LoRa adapter HTTP port | +8187 | +
adapter_lora.redisRouteMapPort | +LoRa adapter Redis Auth Cache port | +6379 | +
twins.enabled | +Enable twins service | +false | +
twins.dbPort | +Twins service DB port | +27017 | +
twins.httpPort | +Twins service HTTP port | +9021 | +
twins.redisCachePort | +Twins service Redis Cache port | +6379 | +
All Mainflux services (both core and add-ons) can have their logLevel
, image.pullPolicy
, image.repository
and image.tag
overridden.
Mainflux Core is a minimalistic set of required Mainflux services. They are all installed by default:
+Mainflux Add-ons are optional services that are disabled by default. You can find the parameters for enabling them in the Configuration table, e.g. to enable the InfluxDB reader & writer you should run helm install with --set influxdb.enabled=true.
+List of add-ons services in charts:
By default the scale of the MQTT adapter, Things, Envoy, Authn and the Message Broker is set to 3. It's recommended that you set these values to the number of nodes in your Kubernetes cluster, i.e. --set defaults.replicaCount=3 --set messageBroker.replicaCount=3, as shown below.
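+For example, applied to the release and namespace used above:
+helm upgrade mainflux . -n mf --set defaults.replicaCount=3 --set messageBroker.replicaCount=3
+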
To send MQTT messages to your host on ports 1883 and 8883, some additional steps are required in configuring the NGINX Ingress Controller.
NGINX Ingress Controller uses ConfigMaps to expose TCP and UDP services. Those ConfigMaps are included in the Helm chart in the ingress.yaml file, assuming that the ConfigMaps are located at ingress-nginx/tcp-services and ingress-nginx/udp-services. These locations were set with the --tcp-services-configmap and --udp-services-configmap flags; you can check this in the deployment of the Ingress Controller, or add the flags in the args section of nginx-ingress-controller if they are not already specified. This is explained in the NGINX Ingress documentation.
Also, these three ports need to be exposed in the Service defined for the Ingress. You can do that with a command that edits your Service:
+kubectl edit svc -n ingress-nginx nginx-ingress-ingress-nginx-controller
and add the following in spec->ports:
+- name: mqtt
+ port: 1883
+ protocol: TCP
+ targetPort: 1883
+- name: mqtts
+ port: 8883
+ protocol: TCP
+ targetPort: 8883
+- name: coap
+ port: 5683
+ protocol: UDP
+ targetPort: 5683
+
+For testing purposes you can generate certificates as explained in detail in the authentication chapter of this document. Use this script and, after replacing every localhost with your hostname, run:
make ca
+make server_cert
+make thing_cert KEY=<thing_secret>
+
+In the certs folder you should now have the following certificates, which we will use for setting up TLS and mTLS:
ca.crt
+ca.key
+ca.srl
+mainflux-server.crt
+mainflux-server.key
+thing.crt
+thing.key
+
Create Kubernetes secrets from those certificates by running the commands from the secrets script. In this example the secrets are created in the mf namespace:
kubectl -n mf create secret tls mainflux-server --key mainflux-server.key --cert mainflux-server.crt
+
+kubectl -n mf create secret generic ca --from-file=ca.crt
+
+You can check whether they were successfully created:
+kubectl get secrets -n mf
+
Now set ingress.hostname and ingress.tls.hostname to your hostname, and ingress.tls.secret to mainflux-server; after a helm upgrade your ingress is secured with the TLS certificate.
For mTLS you also need to set nginx_internal.mtls.tls="mainflux-server" and nginx_internal.mtls.intermediate_crt="ca".
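Put together, the relevant overrides might look like this helm upgrade sketch (the release name, chart path and the example.com hostname are placeholders):
helm upgrade mainflux ./charts/mainflux -n mf \
  --set ingress.hostname=example.com \
  --set ingress.tls.hostname=example.com \
  --set ingress.tls.secret=mainflux-server \
  --set nginx_internal.mtls.tls=mainflux-server \
  --set nginx_internal.mtls.intermediate_crt=ca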
Now you can test sending an MQTT message with these parameters:
+mosquitto_pub -d -L mqtts://<thing_id>:<thing_secret>@example.com:8883/channels/<channel_id>/messages --cert thing.crt --key thing.key --cafile ca.crt -m "test-message"
+
+
+
+
+
+
+
+ Bridging with LoRaWAN Networks can be done over the lora-adapter. This service sits between Mainflux and LoRa Server and simply forwards messages from one system to the other over MQTT, using the appropriate MQTT topics and message formats (JSON and SenML), i.e. respecting the APIs of both systems.
+LoRa Server provides the connectivity layer. The LoRa Gateway Bridge service abstracts the SemTech packet-forwarder UDP protocol into JSON over MQTT, while the LoRa Server service is responsible for de-duplication and handling of uplink frames received by the gateway(s), handling of the LoRaWAN MAC layer and scheduling of downlink data transmissions. Finally, the LoRa App Server service is used to interact with the system.
+Before running the lora-adapter you must install and run LoRa Server. First, execute the following command:
go get github.com/brocaar/loraserver-docker
+
+Once everything is installed, execute the following command from the LoRa Server project root:
+docker-compose up
+
+Troubleshooting: Mainflux and LoRa Server each run their own MQTT broker, which by default occupies MQTT port 1883. If both are run on the same machine, different ports must be used. You can fix this on the Mainflux side by configuring the environment variable MF_MQTT_ADAPTER_MQTT_PORT.
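For example, when starting Mainflux with the project Makefile, moving the MQTT adapter to another port could look like this sketch (1884 is just an arbitrary free port):
MF_MQTT_ADAPTER_MQTT_PORT=1884 make run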
Now that both systems are running you must provision LoRa Server, which offers a RESTful and a gRPC API for integration with external services. You can also do it through the LoRa App Server, which is a good example of such an integration.
+network session key
and application session key
of your Device. You can generate and copy them into your device configuration, or you can use your own pre-generated keys and set them using the LoRa App Server UI.
+ Devices connect through OTAA. Make sure that the LoRa Server device-profile uses the same LoRaWAN release as the device. If the MAC version is 1.0.X, application key = app_key and app_eui = deviceEUI. If the MAC version is 1.1 or the device joins via ABP, both parameters will be needed: the application key and the network key.
Once everything is running and the LoRa Server is provisioned, execute the following command from the Mainflux project root to run the lora-adapter:
+docker-compose -f docker/addons/lora-adapter/docker-compose.yml up -d
+
+Troubleshooting: The lora-adapter subscribes to the LoRa Server MQTT broker and will fail if the connection is not established. You must ensure that the environment variable MF_LORA_ADAPTER_MESSAGES_URL is properly configured.
Remark: By default, MF_LORA_ADAPTER_MESSAGES_URL is set to tcp://lora.mqtt.mainflux.io:1883 in the adapter's docker-compose.yml file. If you run the composition without configuring this variable you will start to receive messages from our demo server.
The lora-adapter uses a Redis database to create a route map between the two systems. Where Mainflux uses Channels to connect Things, LoRa Server uses Applications to connect Devices.
+The lora-adapter uses the metadata of provision events emitted by the Mainflux system to update its route map. For that, you must provision Mainflux Channels and Things with an extra metadata key in the JSON body of the HTTP request. It must be a JSON object with the key lora whose value is another JSON object. This nested JSON object should contain an app_id or a dev_eui field, where app_id or dev_eui must be an existing LoRa application ID or device EUI:
Channel structure:
+{
+ "name": "<channel name>",
+ "metadata:": {
+ "lora": {
+ "app_id": "<application ID>"
+ }
+ }
+}
+
+Thing structure:
+{
+ "type": "device",
+ "name": "<thing name>",
+ "metadata:": {
+ "lora": {
+ "dev_eui": "<device EUI>"
+ }
+ }
+}
+
+To forward LoRa messages the lora-adapter subscribes to the applications/+/devices/+ topics of the LoRa Server MQTT broker. It verifies the app_id and the dev_eui of received messages. If the mapping exists, it uses the corresponding Channel ID and Thing ID to sign and forward the content of the LoRa message to the Mainflux message broker.
Once a channel is provisioned and a thing is connected to it, the thing can start publishing messages on the channel. The following sections provide an example of message publishing for each of the supported protocols.
+To publish a message over a channel, a thing should send the following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/senml+json" -H "Authorization: Thing <thing_secret>" https://localhost/http/channels/<channel_id>/messages -d '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
+
+Note that if you are using the SenML message format, you should always send messages as an array.
+For more information about the HTTP messaging service API, please check out the API documentation.
+To send and receive messages over MQTT you could use Mosquitto tools, or Paho if you want to use MQTT over WebSocket.
+To publish a message over a channel, a thing should run the following command:
+mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost -m '[{"bn":"some-base-name:","bt":1.276020076001e+09, "bu":"A","bver":5, "n":"voltage","u":"V","v":120.1}, {"n":"current","t":-5,"v":1.2}, {"n":"current","t":-4,"v":1.3}]'
+
+To subscribe to a channel, a thing should run the following command:
+mosquitto_sub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost
+
To exchange SenML payloads (JSON or CBOR), use the standard channels/<channel_id>/messages topic.
If you are using TLS to secure the MQTT connection, add --cafile docker/ssl/certs/ca.crt to every command.
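For example, a TLS-secured publish against the local composition might look like the sketch below (it assumes the MQTTS listener is exposed on port 8883, as in the default setup; IDs and the reading are placeholders):
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost -p 8883 --cafile docker/ssl/certs/ca.crt -m '[{"n":"temperature","u":"C","v":23.5}]'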
CoAP adapter implements CoAP protocol using underlying UDP and according to RFC 7252. To send and receive messages over CoAP, you can use CoAP CLI. To set the add-on, please follow the installation instructions provided here.
+Examples:
+coap-cli get channels/<channel_id>/messages/subtopic -auth <thing_secret> -o
+
+coap-cli post channels/<channel_id>/messages/subtopic -auth <thing_secret> -d "hello world"
+
+coap-cli post channels/<channel_id>/messages/subtopic -auth <thing_secret> -d "hello world" -h 0.0.0.0 -p 1234
+
To send a message, use a POST request. To subscribe, send a GET request with the Observe option (flag o) set to false. There are two ways to unsubscribe:
Send a GET request with the Observe option set to true.
Send a RST message as a response to a CONF message received by the server.
Most of the notifications received from the Adapter are non-confirmable. By RFC 7641:
+++Server must send a notification in a confirmable message instead of a non-confirmable message at least every 24 hours. This prevents a client that went away or is no longer interested from remaining in the list of observers indefinitely.
+
The CoAP Adapter sends these notifications every 12 hours. To configure this period, please check the adapter documentation. If the client is no longer interested in receiving notifications, the second scenario described above can be used to unsubscribe.
To publish and receive messages over a channel using WebSocket, you should first send a handshake request to the /channels/<channel_id>/messages path. Don't forget to send the Authorization header with the thing's authorization token. In order to pass the message content type to the WS adapter you can use the Content-Type header.
If you are not able to send custom headers in your handshake request, send them as the query parameters authorization and content-type. Then your path should look like this: /channels/<channel_id>/messages?authorization=<thing_secret>&content-type=<content-type>.
If you are using the docker environment, prepend the URL with ws. So, for example: /ws/channels/<channel_id>/messages?authorization=<thing_secret>&content-type=<content-type>.
const WebSocket = require("ws");
+// do not verify self-signed certificates if you are using one
+process.env.NODE_TLS_REJECT_UNAUTHORIZED = "0";
+// c02ff576-ccd5-40f6-ba5f-c85377aad529 is an example of a thing_auth_key
+const ws = new WebSocket(
+ "ws://localhost:8186/ws/channels/1/messages?authorization=c02ff576-ccd5-40f6-ba5f-c85377aad529"
+);
+ws.on("open", () => {
+ ws.send("something");
+});
+ws.on("message", (data) => {
+ console.log(data);
+});
+ws.on("error", (e) => {
+ console.log(e);
+});
+
+package main
+
+import (
+ "log"
+ "os"
+ "os/signal"
+ "time"
+
+ "github.com/gorilla/websocket"
+)
+
+var done chan interface{}
+var interrupt chan os.Signal
+
+func receiveHandler(connection *websocket.Conn) {
+ defer close(done)
+
+ for {
+ _, msg, err := connection.ReadMessage()
+ if err != nil {
+ log.Fatal("Error in receive: ", err)
+ return
+ }
+
+ log.Printf("Received: %s\n", msg)
+ }
+}
+
+func main() {
+ done = make(chan interface{})
+ interrupt = make(chan os.Signal, 1) // buffered so signal.Notify does not drop the signal
+
+ signal.Notify(interrupt, os.Interrupt)
+
+ channelId := "30315311-56ba-484d-b500-c1e08305511f"
+ thingSecret := "c02ff576-ccd5-40f6-ba5f-c85377aad529"
+
+ socketUrl := "ws://localhost:8186/channels/" + channelId + "/messages/?authorization=" + thingSecret
+
+ conn, _, err := websocket.DefaultDialer.Dial(socketUrl, nil)
+ if err != nil {
+ log.Fatal("Error connecting to Websocket Server: ", err)
+ } else {
+ log.Println("Connected to the ws adapter")
+ }
+ defer conn.Close()
+
+ go receiveHandler(conn)
+
+ for {
+ select {
+
+ case <-interrupt:
+ log.Println("Interrupt occured, closing the connection...")
+ // send a proper close frame first; the deferred conn.Close() tears down the socket afterwards
+ err := conn.WriteMessage(websocket.CloseMessage, websocket.FormatCloseMessage(websocket.CloseNormalClosure, ""))
+ if err != nil {
+ log.Println("Error during closing websocket: ", err)
+ return
+ }
+
+ select {
+ case <-done:
+ log.Println("Receiver Channel Closed! Exiting...")
+
+ case <-time.After(time.Duration(1) * time.Second):
+ log.Println("Timeout in closing receiving channel. Exiting...")
+ }
+ return
+ }
+ }
+}
+
Mainflux also supports MQTT-over-WS, along with the pure WS protocol. This brings numerous benefits for IoT applications derived from the properties of MQTT, such as QoS and PUB/SUB features.
+There are two recommended JavaScript libraries for implementing browser support for Mainflux MQTT-over-WS connectivity:
As WS is an extension of the HTTP protocol, Mainflux exposes it on port 8008, so its usage is practically transparent.
+Additionally, note that the same port as for HTTP is used (8008) and the extension URL /mqtt should be used, i.e. the connection URL should be ws://<host_addr>/mqtt.
For quick testing you can use HiveMQ UI tool.
+Here is an example of a browser application connecting to Mainflux server and sending and receiving messages over WebSocket using MQTT.js library:
+<script src="https://unpkg.com/mqtt/dist/mqtt.min.js"></script>
+<script>
+ // Initialize a mqtt variable globally
+ console.log(mqtt)
+
+ // connection option
+ const options = {
+ clean: true, // clean session: do not resume a previous session
+ connectTimeout: 4000, // Timeout period
+ // Authentication information
+ clientId: '14d6c682-fb5a-4d28-b670-ee565ab5866c',
+ username: '14d6c682-fb5a-4d28-b670-ee565ab5866c',
+ password: 'ec82f341-d4b5-4c77-ae05-34877a62428f',
+ }
+
+ var channelId = '08676a76-101d-439c-b62e-d4bb3b014337'
+ var topic = 'channels/' + channelId + '/messages'
+
+ // Connect string, and specify the connection method by the protocol
+ // ws Unencrypted WebSocket connection
+ // wss Encrypted WebSocket connection
+ const connectUrl = 'ws://localhost/mqtt'
+ const client = mqtt.connect(connectUrl, options)
+
+ client.on('reconnect', (error) => {
+ console.log('reconnecting:', error)
+ })
+
+ client.on('error', (error) => {
+ console.log('Connection failed:', error)
+ })
+
+ client.on('connect', function () {
+ console.log('client connected:' + options.clientId)
+ client.subscribe(topic, { qos: 0 })
+ client.publish(topic, 'WS connection demo!', { qos: 0, retain: false })
+ })
+
+ client.on('message', function (topic, message, packet) {
+ console.log('Received Message:= ' + message.toString() + '\nOn topic:= ' + topic)
+ })
+
+ client.on('close', function () {
+ console.log(options.clientId + ' disconnected')
+ })
+</script>
+
+N.B. The Eclipse Paho lib adds the sub-URL /mqtt automatically, so the procedure for connecting to the server can be something like this:
var loc = { hostname: "localhost", port: 8008 };
+// Create a client instance
+client = new Paho.MQTT.Client(loc.hostname, Number(loc.port), "clientId");
+// Connect the client
+client.connect({ onSuccess: onConnect });
+
In order to use subtopics and give more meaning to your pub/sub channel, you can simply add any suffix to the base /channels/<channel_id>/messages topic.
An example subtopic publish/subscribe for bedroom temperature would be channels/<channel_id>/messages/bedroom/temperature.
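For instance, assuming a thing that is connected to the channel, publishing a reading to that subtopic over MQTT could look like this sketch (IDs and the value are placeholders):
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/bedroom/temperature -h localhost -m '[{"n":"temperature","u":"C","v":23.5}]'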
Subtopics are generic and multilevel. You can use almost any suffix with any depth.
+Topics with subtopics are propagated to the Message Broker in the following format: channels.<channel_id>.<optional_subtopic>.
Our example topic channels/<channel_id>/messages/bedroom/temperature will be translated to the corresponding Message Broker topic channels.<channel_id>.bedroom.temperature.
You can use multilevel subtopics that have multiple parts. These parts are separated by the . or / separators.
+When you use a combination of these two, keep in mind that behind the scenes the / separator will be replaced with the . separator.
+Every empty part of a subtopic will be removed. This means that the subtopic a///b is equivalent to a/b.
+When you want to subscribe, you can use the default Message Broker's (NATS) wildcards * and >. Every subtopic part can have * or > as its value, but if there is any other character beside these wildcards, the subtopic will be invalid. This means that subtopics such as a.b*c.d will be invalid, while a.b.*.c.d will be valid.
Authorization is done on the channel level, so you only need access to a channel in order to have access to its subtopics.
Note: When using MQTT, it's recommended that you use the standard MQTT wildcards + and #.
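As a sketch, subscribing to every subtopic under bedroom with the MQTT # wildcard could look like this (IDs are placeholders; authorization is still checked at the channel level):
mosquitto_sub -u <thing_id> -P <thing_secret> -t 'channels/<channel_id>/messages/bedroom/#' -h localhost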
Mainflux supports the MQTT protocol for message exchange. MQTT is a lightweight Publish/Subscribe messaging protocol used to connect restricted devices in low bandwidth, high-latency or unreliable networks. The publish-subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to and from clients connected to the MQTT adapter.
+Mainflux supports MQTT version 3.1.1. The MQTT adapter is based on Eclipse Paho MQTT client library. The adapter is configured to use nats as the default MQTT broker, but you can use vernemq too.
+In the dev environment, docker profiles are preferred when handling different MQTT and message brokers supported by Mainflux.
+Mainflux uses two types of brokers:
+MQTT_BROKER
: Handles MQTT communication between MQTT adapters and message broker.MESSAGE_BROKER
: Manages communication between adapters and Mainflux writer services.MQTT_BROKER
can be either vernemq
or nats
.
+MESSAGE_BROKER
can be either nats
or rabbitmq
.
Each broker has a unique profile for configuration. The available profiles are:
+vernemq_nats
: Uses vernemq
as MQTT_BROKER and nats
as MESSAGE_BROKER.vernemq_rabbitmq
: Uses vernemq
as MQTT_BROKER and rabbitmq
as MESSAGE_BROKER.nats_nats
: Uses nats
as both MQTT_BROKER and MESSAGE_BROKER.nats_rabbitmq
: Uses nats
as MQTT_BROKER and rabbitmq
as MESSAGE_BROKER.The following command will run VerneMQ as an MQTT broker and Nats as a message broker:
+MF_MQTT_BROKER_TYPE=vernemq MF_BROKER_TYPE=nats make run
+
+The following command will run NATS as an MQTT broker and RabbitMQ as a message broker:
+MF_MQTT_BROKER_TYPE=nats MF_BROKER_TYPE=rabbitmq make run
+
+By default, NATS is used as an MQTT broker and RabbitMQ as a message broker.
+NATS supports MQTT and is designed to empower users to leverage their existing IoT deployments. NATS offers significant advantages in terms of security and observability when used end-to-end. Using the NATS server as a drop-in replacement for an MQTT broker is compelling: this approach allows you to retain your existing IoT investments while benefiting from NATS' secure, resilient, and scalable access to your streams and services.
+To enable MQTT support on NATS, JetStream needs to be enabled. This is done by default in Mainflux. This is because persistence is necessary for sessions and retained messages, even for QoS 0 retained messages. Communication between MQTT and NATS involves creating similar NATS subscriptions when MQTT clients subscribe to topics. This ensures that the interest is registered in the NATS cluster, and messages are delivered accordingly. When MQTT publishers send messages, they are converted to NATS subjects, and matching NATS subscriptions receive the MQTT messages.
+NATS supports up to QoS 1 subscriptions, where the server retains messages until it receives the PUBACK for the corresponding packet identifier. If PUBACK is not received within the "ack_wait" interval, the message is resent. The maximum value for "max_ack_pending" is 65535.
+NATS Server persists all sessions, even if they are created with the "clean session" flag. Sessions are identified by client identifiers. If two connections attempt to use the same client identifier, the server will close the existing connection and accept the new one, reducing the flapping rate.
+NATS supports MQTT in a NATS cluster, with the replication factor automatically set based on cluster size.
+VerneMQ is a powerful MQTT publish/subscribe message broker designed to implement the OASIS industry standard MQTT protocol. It is built to take messaging and IoT applications to the next level by providing a unique set of features related to scalability, reliability, high-performance, and operational simplicity.
+Key features of VerneMQ include:
+VerneMQ is designed from the ground up to work as a distributed message broker, ensuring continued operation even in the event of node or network failures. It can easily scale both horizontally and vertically to handle large numbers of concurrent clients.
+VerneMQ uses a master-less clustering technology, which means there are no special nodes like masters or slaves to consider when adding or removing nodes, making cluster operation safe and simple. This allows MQTT clients to connect to any cluster node and receive messages from any other node. However, it acknowledges the challenges of fulfilling MQTT specification guarantees in a distributed environment, particularly during network partitions.
+Mainflux supports multiple message brokers for message exchange. Message brokers are used to distribute messages to and from clients connected to the different protocols adapters and writers. Writers, which are responsible for storing messages in the database, are connected to the message broker using wildcard subscriptions. This means that writers will receive all messages published to the message broker. Clients can subscribe to the message broker using topic and subtopic combinations. The message broker will then forward all messages published to the topic and subtopic combination to the client.
+Mainflux supports NATS, RabbitMQ and Kafka as message brokers.
Since Mainflux supports configurable message brokers, you can use NATS with JetStream enabled as a message broker. To do so, you need to set MF_BROKER_TYPE to nats and set MF_NATS_URL to the URL of your NATS instance. When using the make command to start Mainflux, MF_BROKER_URL is automatically set to MF_NATS_URL.
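For example, starting the platform against a local NATS instance could look like this sketch (the URL is the standard NATS default and may differ in your deployment):
MF_BROKER_TYPE=nats MF_NATS_URL=nats://localhost:4222 make run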
Mainflux uses the nats:2.9.21-alpine docker image with the following configuration:
max_payload: 1MB
+max_connections: 1M
+port: $MF_NATS_PORT
+http_port: $MF_NATS_HTTP_PORT
+trace: true
+
+jetstream {
+ store_dir: "/data"
+ cipher: "aes"
+ key: $MF_NATS_JETSTREAM_KEY
+ max_mem: 1G
+}
+
+These are the default values, but you can change them by editing the configuration file. For more information about NATS configuration check out the official NATS documentation. The health check endpoint is exposed on MF_NATS_HTTP_PORT at the /healthz path.
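As a quick check, you can probe the health endpoint with curl (8222 is assumed here as the monitoring port; use whatever MF_NATS_HTTP_PORT is set to in your environment):
curl -s http://localhost:8222/healthz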
The main reason for using NATS with JetStream enabled is to have a distributed system with high availability and minimal dependencies. NATS is configured to run as the default message broker, but you can use any other message broker supported by Mainflux. NATS is configured to use JetStream, a distributed streaming platform built on top of NATS that stores messages and provides high availability. This also makes NATS the default event store, although you can use any other event store supported by Mainflux. NATS with JetStream enabled is additionally used as a key-value store for caching purposes, making it the default cache store as well, though you can use any other cache store supported by Mainflux.
+This versatile architecture allows you to use nats alone for the MQTT broker, message broker, event store and cache store. This is the default configuration, but you can use any other MQTT broker, message broker, event store and cache store supported by Mainflux.
Since Mainflux uses a configurable message broker, you can use RabbitMQ as a message broker. To do so, you need to set MF_BROKER_TYPE to rabbitmq and set MF_RABBITMQ_URL to the URL of your RabbitMQ instance. When using the make command to start Mainflux, MF_BROKER_URL is automatically set to MF_RABBITMQ_URL.
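For example, starting the platform against a local RabbitMQ instance could look like this sketch (the URL uses RabbitMQ's default guest credentials, which are only an assumption for a local setup):
MF_BROKER_TYPE=rabbitmq MF_RABBITMQ_URL=amqp://guest:guest@localhost:5672/ make run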
Mainflux uses the rabbitmq:3.9.20-management-alpine docker image, so the management console is available on port MF_RABBITMQ_HTTP_PORT.
Mainflux has one exchange for the entire platform called messages
. This exchange is of type topic
. The exchange is durable
i.e. it will survive broker restarts and remain declared when there are no remaining bindings. The exchange does not auto-delete
when all queues have finished using it. When declaring the exchange no_wait
is set to false
which means that the broker will wait for a confirmation from the server that the exchange was successfully declared. The exchange is not internal
i.e. other exchanges can publish messages to it.
Mainflux uses topic-based routing to route messages to the appropriate queues. The routing key is in the format channels.<channel_id>.<optional_subtopic>
. A few valid routing key examples: channels.318BC587-A68B-40D3-9026-3356FA4E702C
, channels.318BC587-A68B-40D3-9026-3356FA4E702C.bedroom.temperature
.
The AMQP published message doesn't contain any headers. The message body is the payload of the message.
+When subscribing to messages from a channel, a queue is created with the name channels.<channel_id>.<optional_subtopic>
. The queue is durable
i.e. it will survive broker restarts and remain declared when there are no remaining consumers or bindings. The queue does not auto-delete
when all consumers have finished using it. The queue is not exclusive
i.e. it can be accessed in other connections. When declaring the queue we set no_wait
to false
which means that the broker waits for a confirmation from the server that the queue was successfully declared. The queue is not passive i.e. the server creates the queue if it does not exist.
The queue is then bound to the exchange with the routing key channels.<channel_id>.<optional_subtopic>
. The binding is not no-wait i.e. the broker waits for a confirmation from the server that the binding was successfully created.
Once this is done, the consumer can start consuming messages from the queue with a specific client ID. The consumer is not no-local
i.e. the server will not send messages to the connection that published them. The consumer is not exclusive
i.e. the queue can be accessed in other connections. The consumer is no-ack
i.e. the server acknowledges deliveries to this consumer prior to writing the delivery to the network.
When Unsubscribing from a channel, the queue is unbound from the exchange and deleted.
+For more information and examples checkout official nats.io documentation, official rabbitmq documentation, official vernemq documentation and official kafka documentation.
+ + + + + + +Bridging with an OPC-UA Server can be done over the opcua-adapter. This service sits between Mainflux and an OPC-UA Server and just forwards the messages from one system to another.
+The OPC-UA Server provides the connectivity layer. It allows various methods to read information from the OPC-UA server and its nodes. The current version of the opcua-adapter is still experimental and only the Browse and Subscribe methods are implemented. Public OPC-UA test servers are available for testing OPC-UA clients and can be used for development and test purposes.
Execute the following command from the Mainflux project root to run the opcua-adapter:
+docker-compose -f docker/addons/opcua-adapter/docker-compose.yml up -d
+
+The opcua-adapter uses a Redis database to create a route-map between Mainflux and an OPC-UA Server. While Mainflux uses Thing and Channel IDs to sign messages, OPC-UA uses a node ID (node namespace and node identifier combination) and a server URI. The adapter route-map associates a Thing ID with a Node ID and a Channel ID with a Server URI.
The opcua-adapter uses the metadata of provision events emitted by the Mainflux system to update its route map. For that, you must provision Mainflux Channels and Things with an extra metadata key in the JSON body of the HTTP request. It must be a JSON object with the key opcua whose value is another JSON object. This nested JSON object should contain a node_id or a server_uri that corresponds to an existing OPC-UA Node ID or Server URI:
Channel structure:
+{
+ "name": "<channel name>",
+ "metadata:": {
+ "opcua": {
+ "server_uri": "<Server URI>"
+ }
+ }
+}
+
+Thing structure:
+{
+ "name": "<thing name>",
+ "metadata:": {
+ "opcua": {
+ "node_id": "<Node ID>"
+ }
+ }
+}
+
+The opcua-adapter exposes a /browse HTTP endpoint, accessible with the GET method and configurable through the HTTP query parameters server, namespace and identifier. The server URI, the node namespace and the node identifier represent the parent node and are used to fetch the list of available children nodes starting from the given one. By default the root node ID (node namespace and node identifier combination) of an OPC-UA server is ns=0;i=84. It's also the default value used by the opcua-adapter to do the browsing if only the server URI is specified in the HTTP query.
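As an illustration, browsing the children of the root node could look like the sketch below (port 8188 follows the adapter_opcua.httpPort default listed earlier; the server URI is a placeholder):
curl -s "http://localhost:8188/browse?server=<server_uri>&namespace=0&identifier=84"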
To create an OPC-UA subscription, the user should connect the Thing to the Channel. This will automatically create the connection, enable the Redis route-map and run a subscription to the server_uri and node_id defined in the Thing and Channel metadata.
To forward OPC-UA messages the opcua-adapter subscribes to the Node ID of an OPC-UA Server URI. It verifies the server_uri and the node_id of received messages. If the mapping exists, it uses the corresponding Channel ID and Thing ID to sign and forward the content of the OPC-UA message to the Mainflux message broker. If the mapping or the connection between the Thing and the Channel doesn't exist, the subscription stops.
Provisioning is the process of configuring an IoT platform, in which a system operator creates and sets up the different entities used in the platform - users, groups, channels and things.
+Use the Mainflux API to create a user account:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" https://localhost/users -d '{"name": "John Doe", "credentials": {"identity": "john.doe@email.com", "secret": "12345678"}, "status": "enabled"}'
+
+Response should look like this:
+HTTP/2 201
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 08:40:39 GMT
+content-type: application/json
+content-length: 229
+location: /users/71db4bb0-591e-4f76-b766-b39ced9fc6b8
+strict-transport-security: max-age=63072000; includeSubdomains
+x-frame-options: DENY
+x-content-type-options: nosniff
+access-control-allow-origin: *
+access-control-allow-methods: *
+access-control-allow-headers: *
+
+{
+ "id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "John Doe",
+ "credentials": { "identity": "john.doe@email.com" },
+ "created_at": "2023-04-04T08:40:39.319602Z",
+ "updated_at": "2023-04-04T08:40:39.319602Z",
+ "status": "enabled"
+}
+
+Note that when using the official docker-compose, all services are behind an nginx proxy and all traffic is TLS encrypted.
In order for this user to be able to authenticate to the system, you will have to create an authorization token for them:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" https://localhost/users/tokens/issue -d '{"identity":"john.doe@email.com", "secret":"12345678"}'
+
+Response should look like this:
+HTTP/2 201
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 08:40:58 GMT
+content-type: application/json
+content-length: 709
+strict-transport-security: max-age=63072000; includeSubdomains
+x-frame-options: DENY
+x-content-type-options: nosniff
+access-control-allow-origin: *
+access-control-allow-methods: *
+access-control-allow-headers: *
+
+{
+ "access_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2NTE2NTgsImlhdCI6MTY4MDU5NzY1OCwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI3MWRiNGJiMC01OTFlLTRmNzYtYjc2Ni1iMzljZWQ5ZmM2YjgiLCJ0eXBlIjoiYWNjZXNzIn0.E4v79FvikIVs-eYOJAgepBX67G2Pzd9YnC-k3xkVrRQcAjHSdMx685jttr9-uuZtF1q3yIpvV-NdQJ2CG5eDtw",
+ "refresh_token": "eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2ODQwNTgsImlhdCI6MTY4MDU5NzY1OCwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI3MWRiNGJiMC01OTFlLTRmNzYtYjc2Ni1iMzljZWQ5ZmM2YjgiLCJ0eXBlIjoicmVmcmVzaCJ9.K236Hz9nsm3dnvW6i7myu5xWcBaNFEMAIeekWkiS_X9y0sQ1LZwl997hkkj4IHFFrbn8KLfmkOfTOqVWgUREFg",
+ "access_type": "Bearer"
+}
+
+For more information about the Users service API, please check out the API documentation.
+Before proceeding, make sure that you have created a new account and obtained an authorization token. You can set your access_token
in the USER_TOKEN
environment variable:
USER_TOKEN=<access_token>
+
++This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint currently found at /things/bulk.
+
Things are created by executing a POST /things request with a JSON payload. Note that you will need a user_token in order to create things that belong to this particular user.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" https://localhost/things -d '{"name":"weio"}'
+
+The response will contain a Location header whose value represents the path to the newly created thing:
HTTP/2 201
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:06:50 GMT
+content-type: application/json
+content-length: 282
+location: /things/9dd12d93-21c9-4147-92fe-769386efb6cc
+access-control-expose-headers: Location
+
+{
+ "id": "9dd12d93-21c9-4147-92fe-769386efb6cc",
+ "name": "weio",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "551e9869-d10f-4682-8319-5a4b18073313" },
+ "created_at": "2023-04-04T09:06:50.460258649Z",
+ "updated_at": "2023-04-04T09:06:50.460258649Z",
+ "status": "enabled"
+}
+
+Multiple things can be created by executing a POST /things/bulk
request with a JSON payload. The payload should contain a JSON array of the things to be created. If there is an error any of the things, none of the things will be created.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" https://localhost/things/bulk -d '[{"name":"weio"},{"name":"bob"}]'
+
+The response's body will contain a list of the created things.
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 08:42:04 GMT
+content-type: application/json
+content-length: 586
+access-control-expose-headers: Location
+
+{
+ "total": 2,
+ "things": [{
+ "id": "1b1cd38f-62cd-4f17-b47e-5ff4e97881e8",
+ "name": "weio",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "43bd950e-0b3f-46f6-a92c-296a6a0bfe66" },
+ "created_at": "2023-04-04T08:42:04.168388927Z",
+ "updated_at": "2023-04-04T08:42:04.168388927Z",
+ "status": "enabled"
+ },
+ {
+ "id": "b594af97-9550-4b11-86e1-2b6db7e329b9",
+ "name": "bob",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "9f89f52e-1b06-4416-8294-ae753b0c4bea" },
+ "created_at": "2023-04-04T08:42:04.168390109Z",
+ "updated_at": "2023-04-04T08:42:04.168390109Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+In order to retrieve data of provisioned things that are written in database, you can send following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/things
+
+Notice that you will receive only those things that were provisioned by user_token
owner.
HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 08:42:27 GMT
+content-type: application/json
+content-length: 570
+access-control-expose-headers: Location
+
+{
+ "limit": 10,
+ "total": 2,
+ "things": [{
+ "id": "1b1cd38f-62cd-4f17-b47e-5ff4e97881e8",
+ "name": "weio",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "43bd950e-0b3f-46f6-a92c-296a6a0bfe66" },
+ "created_at": "2023-04-04T08:42:04.168388Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ },
+ {
+ "id": "b594af97-9550-4b11-86e1-2b6db7e329b9",
+ "name": "bob",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "9f89f52e-1b06-4416-8294-ae753b0c4bea" },
+ "created_at": "2023-04-04T08:42:04.16839Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+You can specify offset
and limit
parameters in order to fetch a specific subset of things. In that case, your request should look like:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/things?offset=0&limit=5
+
+You can specify name
and/or metadata
parameters in order to fetch specific subset of things. When specifying metadata you can specify just a part of the metadata JSON you want to match.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/things?offset=0&limit=5&name="weio"
+
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 08:43:09 GMT
+content-type: application/json
+content-length: 302
+access-control-expose-headers: Location
+
+{
+ "limit": 5,
+ "total": 1,
+ "things": [{
+ "id": "1b1cd38f-62cd-4f17-b47e-5ff4e97881e8",
+ "name": "weio",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "43bd950e-0b3f-46f6-a92c-296a6a0bfe66" },
+ "created_at": "2023-04-04T08:42:04.168388Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }]
+}
+
+If you don't provide them, default values will be used instead: 0 for offset
and 10 for limit
. Note that limit
cannot be set to values greater than 100. Providing invalid values will be considered malformed request.
This is a special endpoint that allows you to disable a thing, soft deleting it from the database. In order to disable your own thing you can send the following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Authorization: Bearer $USER_TOKEN" https://localhost/things/1b1cd38f-62cd-4f17-b47e-5ff4e97881e8/disable
+
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:00:40 GMT
+content-type: application/json
+content-length: 277
+access-control-expose-headers: Location
+
+{
+ "id": "1b1cd38f-62cd-4f17-b47e-5ff4e97881e8",
+ "name": "weio",
+ "owner": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "credentials": { "secret": "43bd950e-0b3f-46f6-a92c-296a6a0bfe66" },
+ "created_at": "2023-04-04T08:42:04.168388Z",
+ "updated_at": "2023-04-04T08:42:04.168388Z",
+ "status": "disabled"
+}
+
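A disabled thing can later be re-enabled through the matching enable endpoint; the call below is a sketch that assumes the same URL pattern as disable:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Authorization: Bearer $USER_TOKEN" https://localhost/things/1b1cd38f-62cd-4f17-b47e-5ff4e97881e8/enable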
++This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint currently found at /channels/bulk.
+
Channels are created by executing request POST /channels
:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels -d '{"name":"mychan"}'
+
+After sending request you should receive response with Location
header that contains path to newly created channel:
HTTP/2 201
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:18:10 GMT
+content-type: application/json
+content-length: 235
+location: /channels/0a67a8ee-eda9-408e-af83-f895096b7359
+access-control-expose-headers: Location
+
+{
+ "id": "0a67a8ee-eda9-408e-af83-f895096b7359",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "mychan",
+ "created_at": "2023-04-04T09:18:10.26603Z",
+ "updated_at": "2023-04-04T09:18:10.26603Z",
+ "status": "enabled"
+}
+
+Multiple channels can be created by executing a POST /things/bulk
request with a JSON payload. The payload should contain a JSON array of the channels to be created. If there is an error any of the channels, none of the channels will be created.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels/bulk -d '[{"name":"joe"},{"name":"betty"}]'
+
+The response's body will contain a list of the created channels.
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:11:16 GMT
+content-type: application/json
+content-length: 487
+access-control-expose-headers: Location
+
+{
+ "channels": [{
+ "id": "5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "joe",
+ "created_at": "2023-04-04T09:11:16.131972Z",
+ "updated_at": "2023-04-04T09:11:16.131972Z",
+ "status": "disabled"
+ },
+ {
+ "id": "ff1316f1-d3c6-4590-8bf3-33774d79eab2",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "betty",
+ "created_at": "2023-04-04T09:11:16.138881Z",
+ "updated_at": "2023-04-04T09:11:16.138881Z",
+ "status": "disabled"
+ }
+ ]
+}
+
+In order to retrieve data of provisioned channels that are written in database, you can send following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels
+
Notice that you will receive only those channels that were provisioned by the user_token owner.
HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:13:48 GMT
+content-type: application/json
+content-length: 495
+access-control-expose-headers: Location
+
+{
+ "total": 2,
+ "channels": [{
+ "id": "5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "joe",
+ "created_at": "2023-04-04T09:11:16.131972Z",
+ "updated_at": "2023-04-04T09:11:16.131972Z",
+ "status": "enabled"
+ },
+ {
+ "id": "ff1316f1-d3c6-4590-8bf3-33774d79eab2",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "betty",
+ "created_at": "2023-04-04T09:11:16.138881Z",
+ "updated_at": "2023-04-04T09:11:16.138881Z",
+ "status": "enabled"
+ }
+ ]
+}
+
+You can specify offset
and limit
parameters in order to fetch specific subset of channels. In that case, your request should look like:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels?offset=0&limit=5
+
+If you don't provide them, default values will be used instead: 0 for offset
and 10 for limit
. Note that limit
cannot be set to values greater than 100. Providing invalid values will be considered malformed request.
This is a special endpoint that allows you to disable a channel, soft deleting it from the database. In order to disable you own channel you can send following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels/5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820/disable
+
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:16:31 GMT
+content-type: application/json
+content-length: 235
+access-control-expose-headers: Location
+
+{
+ "id": "5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "joe",
+ "created_at": "2023-04-04T09:11:16.131972Z",
+ "updated_at": "2023-04-04T09:11:16.131972Z",
+ "status": "disabled"
+}
+
+Channel can be observed as a communication group of things. Only things that are connected to the channel can send and receive messages from other things in this channel. Things that are not connected to this channel are not allowed to communicate over it. Users may also be assigned to channels, thus sharing things between users. With the necessary policies in place, users can be granted access to things that are not owned by them.
+A user who is the owner of a channel or a user that has been assigned to the channel with the required policy can connect things to the channel. This is equivalent of giving permissions to these things to communicate over given communication group.
+To connect a thing to the channel you should send following request:
++This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint found at /connect.
+
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X PUT -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels/<channel_id>/things/<thing_id>
+
+HTTP/2 201
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:20:23 GMT
+content-type: application/json
+content-length: 266
+access-control-expose-headers: Location
+
+{
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "subject": "b594af97-9550-4b11-86e1-2b6db7e329b9",
+ "object": "ff1316f1-d3c6-4590-8bf3-33774d79eab2",
+ "actions": ["m_write", "m_read"],
+ "created_at": "2023-04-04T09:20:23.015342Z",
+ "updated_at": "2023-04-04T09:20:23.015342Z"
+}
+
+To connect multiple things to a channel, you can send the following request:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H "Content-Type: application/json" -H "Authorization: Bearer $USER_TOKEN" https://localhost/connect -d '{"channel_ids":["<channel_id>", "<channel_id>"],"thing_ids":["<thing_id>", "<thing_id>"]}'
+
+You can observe which things are connected to specific channel:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels/<channel_id>/things
+
+Response that you'll get should look like this:
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:53:21 GMT
+content-type: application/json
+content-length: 254
+access-control-expose-headers: Location
+
+{
+ "limit": 10,
+ "total": 1,
+ "things": [{
+ "id": "b594af97-9550-4b11-86e1-2b6db7e329b9",
+ "name": "bob",
+ "credentials": { "secret": "9f89f52e-1b06-4416-8294-ae753b0c4bea" },
+ "created_at": "2023-04-04T08:42:04.16839Z",
+ "updated_at": "0001-01-01T00:00:00Z",
+ "status": "enabled"
+ }]
+}
+
+You can observe to which channels is specified thing connected:
+curl -s -S -i --cacert docker/ssl/certs/ca.crt -H "Authorization: Bearer $USER_TOKEN" https://localhost/things/<thing_id>/channels
+
+Response that you'll get should look like this:
+HTTP/2 200
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:57:10 GMT
+content-type: application/json
+content-length: 261
+access-control-expose-headers: Location
+
+{
+ "total": 1,
+ "channels": [{
+ "id": "ff1316f1-d3c6-4590-8bf3-33774d79eab2",
+ "owner_id": "71db4bb0-591e-4f76-b766-b39ced9fc6b8",
+ "name": "betty",
+ "created_at": "2023-04-04T09:11:16.138881Z",
+ "updated_at": "2023-04-04T09:11:16.138881Z",
+ "status": "enabled"
+ }]
+}
+
+If you want to disconnect your thing from the channel, send following request:
++This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint found at /disconnect.
+
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X DELETE -H "Authorization: Bearer $USER_TOKEN" https://localhost/channels/<channel_id>/things/<thing_id>
+
+Response that you'll get should look like this:
+HTTP/2 204
+server: nginx/1.23.3
+date: Tue, 04 Apr 2023 09:57:53 GMT
+access-control-expose-headers: Location
+
+For more information about the Things service API, please check out the API documentation.
Provisioning is the process of configuring an IoT platform, in which a system operator creates and sets up the different entities used in the platform - users, channels and things. It is part of setting up IoT applications, where we connect devices on the edge with the platform in the cloud. For provisioning we can use the Mainflux CLI to create users and, for each node on the edge (e.g. a gateway), the required number of things and channels, connecting them and creating certificates if needed. The Provision service is used to set up the initial application configuration once a user is created: it creates things, channels, connections and certificates. Once a user is created, we can use provision to create a setup for an edge node in one HTTP request instead of issuing several CLI commands.
+Provision service provides an HTTP API to interact with Mainflux.
For gateways to communicate with Mainflux, configuration is required (MQTT host, thing, channels, certificates...). The gateway sends a request to the Bootstrap service, providing <external_id> and <external_key> in the HTTP request, to get that configuration. To make the request to the Bootstrap service you can use the Agent service on a gateway.
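For illustration, the configuration request the gateway (or Agent on its behalf) sends might look roughly like this sketch (host and port 9013 follow the Bootstrap defaults used later in this guide; the IDs are placeholders):
curl -s -S -X GET http://localhost:9013/things/bootstrap/<external_id> -H "Authorization: Thing <external_key>" -H 'Content-Type: application/json'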
To create a bootstrap configuration you can use the Bootstrap or Provision service. The Mainflux UI uses the Bootstrap service for creating gateway configurations. The Provision service provides an easy way of provisioning your gateways, i.e. creating a bootstrap configuration and as many things and channels as your setup requires.
Also, you may use provision service to create certificates for each thing. Each service running on gateway may require more than one thing and channel for communication.
+If, for example, you are using services Agent and Export on a gateway you will need two channels for Agent
(data
and control
) and one thing for Export
.
+Additionally, if you enabled mTLS each service will need its own thing and certificate for access to Mainflux.
+Your setup could require any number of things and channels, this kind of setup we can call provision layout
.
Provision service provides a way of specifying this provision layout
and creating a setup according to that layout by serving requests on /mapping
endpoint. Provision layout is configured in config.toml.
The service is configured using the environment variables presented in the following table. Note that any unset variables will be replaced with their default values.
+By default, call to /mapping
endpoint will create one thing and two channels (control
and data
) and connect it as this is typical setup required by Agent. If there is a requirement for different provision layout we can use config file in addition to environment variables.
For the purposes of running provision as an add-on in docker composition environment variables seems more suitable. Environment variables are set in .env.
+Configuration can be specified in config.toml. Config file can specify all the settings that environment variables can configure and in addition /mapping
endpoint provision layout can be configured.
In config.toml
we can enlist an array of things and channels that we want to create and make connections between them which we call provision layout.
Things Metadata can be whatever suits your needs. Thing that has metadata with external_id
will have bootstrap configuration created, external_id
value will be populated with value from request).
+Bootstrap configuration can be fetched with Agent. For channel's metadata type
is reserved for control
and data
which we use with Agent.
Example of provision layout below
+[bootstrap]
+ [bootstrap.content]
+ [bootstrap.content.agent.edgex]
+ url = "http://localhost:48090/api/v1/"
+
+ [bootstrap.content.agent.log]
+ level = "info"
+
+ [bootstrap.content.agent.mqtt]
+ mtls = false
+ qos = 0
+ retain = false
+ skip_tls_ver = true
+ url = "localhost:1883"
+
+ [bootstrap.content.agent.server]
+ nats_url = "localhost:4222"
+ port = "9000"
+
+ [bootstrap.content.agent.heartbeat]
+ interval = "30s"
+
+ [bootstrap.content.agent.terminal]
+ session_timeout = "30s"
+
+ [bootstrap.content.export.exp]
+ log_level = "debug"
+ nats = "nats://localhost:4222"
+ port = "8172"
+ cache_url = "localhost:6379"
+ cache_pass = ""
+ cache_db = "0"
+
+ [bootstrap.content.export.mqtt]
+ ca_path = "ca.crt"
+ cert_path = "thing.crt"
+ channel = ""
+ host = "tcp://localhost:1883"
+ mtls = false
+ password = ""
+ priv_key_path = "thing.key"
+ qos = 0
+ retain = false
+ skip_tls_ver = false
+ username = ""
+
+ [[bootstrap.content.export.routes]]
+ mqtt_topic = ""
+ nats_topic = "channels"
+ subtopic = ""
+ type = "mfx"
+ workers = 10
+
+ [[bootstrap.content.export.routes]]
+ mqtt_topic = ""
+ nats_topic = "export"
+ subtopic = ""
+ type = "default"
+ workers = 10
+
+[[things]]
+ name = "thing"
+
+ [things.metadata]
+ external_id = "xxxxxx"
+
+[[channels]]
+ name = "control-channel"
+
+ [channels.metadata]
+ type = "control"
+
+[[channels]]
+ name = "data-channel"
+
+ [channels.metadata]
+ type = "data"
+
+[[channels]]
+ name = "export-channel"
+
+ [channels.metadata]
+ type = "export"
+
+[bootstrap.content]
will be marshalled and saved into content
field in bootstrap configs when request to /mappings
is made, content
field from bootstrap config is used to create Agent
and Export
configuration files upon Agent
fetching bootstrap configuration.
In order to create necessary entities provision service needs to authenticate against Mainflux.
+To provide authentication credentials to the provision service you can pass it in as an environment variable or in a config file as Mainflux user and password or as API token (that can be issued on /users/tokens/issue
endpoint of users service.
Additionally, users or API token can be passed in Authorization header, this authentication takes precedence over others.
+username
, password
- (MF_PROVISION_USER
, MF_PROVISION_PASSWORD
in .env, mf_user
, mf_pass
in config.tomlMF_PROVISION_API_KEY
in .env or config.tomlAuthorization: Bearer Token|ApiKey
- request authorization header containing users token. Check auth.Provision service can be run as a standalone or in docker composition as addon to the core docker composition.
+Standalone:
+MF_PROVISION_BS_SVC_URL=http://localhost:9013/things \
+MF_PROVISION_THINGS_LOCATION=http://localhost:9000 \
+MF_PROVISION_USERS_LOCATION=http://localhost:9002 \
+MF_PROVISION_CONFIG_FILE=docker/addons/provision/configs/config.toml \
+build/mainflux-provision
+
+Docker composition:
+docker-compose -f docker/addons/provision/docker-compose.yml up
+
+For the case that credentials or API token is passed in configuration file or environment variables, call to /mapping
endpoint doesn't require Authentication
header:
curl -s -S -X POST http://localhost:9016/mapping -H 'Content-Type: application/json' -d '{"external_id": "33:52:77:99:43", "external_key": "223334fw2"}'
+
+In the case that provision service is not deployed with credentials or API key or you want to use user other than one being set in environment (or config file):
+curl -s -S -X POST http://localhost:9016/mapping -H "Authorization: Bearer <token|api_key>" -H 'Content-Type: application/json' -d '{"external_id": "<external_id>", "external_key": "<external_key>"}'
+
+Or if you want to specify a name for thing different than in config.toml
you can specify post data as:
{
+ "name": "<name>",
+ "external_id": "<external_id>",
+ "external_key": "<external_key>"
+}
+
+Response contains created things, channels and certificates if any:
+{
+ "things": [
+ {
+ "id": "c22b0c0f-8c03-40da-a06b-37ed3a72c8d1",
+ "name": "thing",
+ "key": "007cce56-e0eb-40d6-b2b9-ed348a97d1eb",
+ "metadata": {
+ "external_id": "33:52:79:C3:43"
+ }
+ }
+ ],
+ "channels": [
+ {
+ "id": "064c680e-181b-4b58-975e-6983313a5170",
+ "name": "control-channel",
+ "metadata": {
+ "type": "control"
+ }
+ },
+ {
+ "id": "579da92d-6078-4801-a18a-dd1cfa2aa44f",
+ "name": "data-channel",
+ "metadata": {
+ "type": "data"
+ }
+ }
+ ],
+ "whitelisted": {
+ "c22b0c0f-8c03-40da-a06b-37ed3a72c8d1": true
+ }
+}
+
+Deploy Mainflux UI docker composition as it contains all the required services for provisioning to work ( certs
, bootstrap
and Mainflux core)
git clone https://github.com/mainflux/ui
+cd ui
+docker-compose -f docker/docker-compose.yml up
+
+Create user and obtain access token
+mainflux-cli -m https://mainflux.com users create john.doe@email.com 12345678
+
+# Retrieve token
+mainflux-cli -m https://mainflux.com users token john.doe@email.com 12345678
+
+created: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTY1ODU3MDUsImlhdCI6MTU5NjU0OTcwNSwiaXNzIjoibWFpbmZsdXguYXV0aG4iLCJzdWIiOiJtaXJrYXNoQGdtYWlsLmNvbSIsInR5cGUiOjB9._vq0zJzFc9tQqc8x74kpn7dXYefUtG9IB0Cb-X2KMK8
+
+Put a value of token into environment variable
+TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTY1ODU3MDUsImlhdCI6MTU5NjU0OTcwNSwiaXNzIjoibWFpbmZsdXguYXV0aG4iLCJzdWIiOiJtaXJrYXNoQGdtYWlsLmNvbSIsInR5cGUiOjB9._vq0zJzFc9tQqc8x74kpn7dXYefUtG9IB0Cb-X2KMK8
+
+Make a call to provision endpoint
+curl -s -S -X POST http://mainflux.com:9016/mapping -H "Authorization: Bearer $TOKEN" -H 'Content-Type: application/json' -d '{"name":"edge-gw", "external_id" : "gateway", "external_key":"external_key" }'
+
+To check the results you can make a call to bootstrap endpoint
+curl -s -S -X GET http://mainflux.com:9013/things/bootstrap/gateway -H "Authorization: Thing external_key" -H 'Content-Type: application/json'
+
+Or you can start Agent
with:
git clone https://github.com/mainflux/agent
+cd agent
+make
+MF_AGENT_BOOTSTRAP_ID=gateway MF_AGENT_BOOTSTRAP_KEY=external_key MF_AGENT_BOOTSTRAP_URL=http://mainflux.com:9013/things/bootstrap build/mainflux-agent
+
+Agent will retrieve the connection parameters and connect to the Mainflux cloud.
+For more information about the Provision service API, please check out the API documentation.
"},{"location":"#features","title":"Features","text":"Thank you for your interest in Mainflux and the desire to contribute!
Take a look at our open issues. The good-first-issue label is specifically for issues that are great for getting started. Checkout the contribution guide to learn more about our style and conventions. Make your changes compatible to our workflow.
"},{"location":"#license","title":"License","text":"Apache-2.0
"},{"location":"api/","title":"API","text":""},{"location":"api/#reference","title":"Reference","text":"API reference in the Swagger UI can be found at: https://api.mainflux.io
"},{"location":"api/#users","title":"Users","text":""},{"location":"api/#create-user","title":"Create User","text":"To start working with the Mainflux system, you need to create a user account.
Identity, which can be email-address (this must be unique as it identifies the user) and secret (password must contain at least 8 characters).
curl -sSiX POST http://localhost/users -H \"Content-Type: application/json\" [-H \"Authorization: Bearer <user_token>\"] -d @- << EOF\n{\n \"name\": \"[name]\",\n \"tags\": [\"[tag1]\", \"[tag2]\"],\n \"credentials\": {\n \"identity\": \"<user_identity>\",\n \"secret\": \"<user_secret>\"\n },\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[status]\",\n \"role\": \"[role]\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/users -H \"Content-Type: application/json\" -d @- << EOF\n{\n \"name\": \"John Doe\",\n \"credentials\": {\n \"identity\": \"john.doe@email.com\",\n \"secret\": \"12345678\"\n }\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:45:38 GMT\nContent-Type: application/json\nContent-Length: 223\nConnection: keep-alive\nLocation: /users/4f22fa45-50ca-491b-a7c9-680a2608dc13\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"4f22fa45-50ca-491b-a7c9-680a2608dc13\",\n \"name\": \"John Doe\",\n \"credentials\": { \"identity\": \"john.doe@email.com\" },\n \"created_at\": \"2023-06-14T13:45:38.808423Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
You can also use <user_token>
so that the owner of the new user is the one identified by the <user_token>
for example:
curl -sSiX POST http://localhost/users -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"John Doe\",\n \"credentials\": {\n \"identity\": \"jane.doe@email.com\",\n \"secret\": \"12345678\"\n },\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:46:47 GMT\nContent-Type: application/json\nContent-Length: 252\nConnection: keep-alive\nLocation: /users/1890c034-7ef9-4cde-83df-d78ea1d4d281\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-token","title":"Create Token","text":"To log in to the Mainflux system, you need to create a user_token
.
curl -sSiX POST http://localhost/users/tokens/issue -H \"Content-Type: application/json\" -d @- << EOF\n{\n \"identity\": \"<user_identity>\",\n \"secret\": \"<user_secret>\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/users/tokens/issue -H \"Content-Type: application/json\" -d @- << EOF\n{\n \"identity\": \"john.doe@email.com\",\n \"secret\": \"12345678\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:47:32 GMT\nContent-Type: application/json\nContent-Length: 709\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTEzNTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.AND1sm6mN2wgUxVkDhpipCoNa87KPMghGaS5-4dU0iZaqGIUhWScrEJwOahT9ts1TZSd1qEcANTIffJ_y2Pbsg\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY4NTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.z3OWCHhNHNuvkzBqEAoLKWS6vpFLkIYXhH9cZogSCXd109-BbKVlLvYKmja-hkhaj_XDJKySDN3voiazBr_WTA\",\n \"access_type\": \"Bearer\"\n}\n
"},{"location":"api/#refresh-token","title":"Refresh Token","text":"To issue another access_token
after it has expired, you need to use a refresh_token
.
curl -sSiX POST http://localhost/users/tokens/refresh -H \"Content-Type: application/json\" -H \"Authorization: Bearer <refresh_token>\"\n
For example:
curl -sSiX POST http://localhost/users/tokens/refresh -H \"Content-Type: application/json\" -H \"Authorization: Bearer <refresh_token>\"\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:49:45 GMT\nContent-Type: application/json\nContent-Length: 709\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTE0ODUsImlhdCI6MTY4Njc1MDU4NSwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.zZcUH12x7Tlnecrc3AAFnu3xbW4wAOGifWZMnba2EnhosHWDuSN4N7s2S7OxPOrBGAG_daKvkA65mi5n1sxi9A\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY5ODUsImlhdCI6MTY4Njc1MDU4NSwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.AjxJ5xlUUSjW99ECUAU19ONeCs8WlRl52Ost2qGTADxHGYBjPMqctruyoTYJbdORtL5f2RTxZsnLX_1vLKRY2A\",\n \"access_type\": \"Bearer\"\n}\n
"},{"location":"api/#get-user-profile","title":"Get User Profile","text":"You can always check the user profile that is logged-in by using the user_token
.
curl -sSiX GET http://localhost/users/profile -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/users/profile -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:51:59 GMT\nContent-Type: application/json\nContent-Length: 312\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": {\n \"identity\": \"jane.doe@email.com\"\n },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-user","title":"Get User","text":"You can always check the user entity by entering the user ID and user_token
.
curl -sSiX GET http://localhost/users/<user_id> -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281 -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:51:59 GMT\nContent-Type: application/json\nContent-Length: 312\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": {\n \"identity\": \"jane.doe@email.com\"\n },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-users","title":"Get Users","text":"You can get all users in the database by querying /users
endpoint.
curl -sSiX GET http://localhost/users -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/users -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:52:36 GMT\nContent-Type: application/json\nContent-Length: 285\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"total\": 1,\n \"users\": [\n {\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
If you want to paginate your results then use offset
, limit
, metadata
, name
, identity
, tag
, status
and visbility
as query parameters.
curl -sSiX GET http://localhost/users?[offset=<offset>]&[limit=<limit>]&[identity=<identity>]&[name=<name>]&[tag=<tag>]&[status=<status>]&[visibility=<visibility>] -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/users?offset=0&limit=5&identity=jane.doe@email.com -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:53:16 GMT\nContent-Type: application/json\nContent-Length: 284\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 5,\n \"total\": 1,\n \"users\": [\n {\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#update-user","title":"Update User","text":"Updating user's name and/or metadata
curl -sSiX PATCH http://localhost/users/<user_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"[new_name]\",\n \"metadata\": {\n \"[key]\": \"[value]\"\n }\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281 -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"Jane Doe\",\n \"metadata\": {\n \"location\": \"london\",\n }\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:54:40 GMT\nContent-Type: application/json\nContent-Length: 354\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:54:40.208005Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-user-tags","title":"Update User Tags","text":"Updating user's tags
curl -sSiX PATCH http://localhost/users/<user_id>/tags -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"tags\": [\n \"[tag_1]\",\n ...\n \"[tag_N]\"\n ]\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/tags -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"tags\": [\"male\", \"developer\"]\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:55:18 GMT\nContent-Type: application/json\nContent-Length: 375\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:55:18.353027Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-user-owner","title":"Update User Owner","text":"Updating user's owner
curl -sSiX PATCH http://localhost/users/<user_id>/owner -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"owner\": \"<owner_id>\"\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/owner -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"owner\": \"532311a4-c13b-4061-b991-98dcae7a934e\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:56:32 GMT\nContent-Type: application/json\nContent-Length: 375\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"owner\": \"532311a4-c13b-4061-b991-98dcae7a934e\",\n \"credentials\": { \"identity\": \"jane.doe@email.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:56:32.059484Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-user-identity","title":"Update User Identity","text":"Updating user's identity
curl -sSiX PATCH http://localhost/users/<user_id>/identity -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"identity\": \"<user_identity>\"\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/identity -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"identity\": \"updated.jane.doe@gmail.com\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:59:53 GMT\nContent-Type: application/json\nContent-Length: 382\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"updated.jane.doe@gmail.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:59:53.422595Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#change-secret","title":"Change Secret","text":"Changing the user secret can be done by calling the update secret method
curl -sSiX PATCH http://localhost/users/secret -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"old_secret\": \"<old_secret>\",\n \"new_secret\": \"<new_secret>\"\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/users/secret -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"old_secret\": \"12345678\",\n \"new_secret\": \"12345678a\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 14:00:35 GMT\nContent-Type: application/json\nContent-Length: 281\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#enable-user","title":"Enable User","text":"Changing the user status to enabled can be done by calling the enable user method
curl -sSiX POST http://localhost/users/<user_id>/enable -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/enable -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 14:01:25 GMT\nContent-Type: application/json\nContent-Length: 382\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"updated.jane.doe@gmail.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:59:53.422595Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#disable-user","title":"Disable User","text":"Changing the user status to disabled can be done by calling the disable user method
curl -sSiX POST http://localhost/users/<user_id>/disable -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/disable -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 14:01:23 GMT\nContent-Type: application/json\nContent-Length: 383\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"identity\": \"updated.jane.doe@gmail.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:59:53.422595Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"disabled\"\n}\n
"},{"location":"api/#get-user-memberships","title":"Get User Memberships","text":"You can get all groups a user is assigned to by calling the get user memberships method.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, parentID
, ownerID
, tree
and dir
as query parameters.
The user identified by the user_token
must be assigned to the same group as the user with id user_id
with c_list
action. Alternatively, the user identified by the user_token
must be the owner of the user with id user_id
.
curl -sSiX GET http://localhost/users/<user_id>/memberships -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/users/1890c034-7ef9-4cde-83df-d78ea1d4d281/memberships -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:22:18 GMT\nContent-Type: application/json\nContent-Length: 367\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 0,\n \"offset\": 0,\n \"memberships\": [\n {\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Data analysts\",\n \"description\": \"This group would be responsible for analyzing data collected from sensors.\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"2023-06-15T10:17:56.475241Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#things","title":"Things","text":""},{"location":"api/#create-thing","title":"Create Thing","text":"To create a thing, you need the thing and a user_token
curl -sSiX POST http://localhost/things -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"id\": \"[thing_id]\",\n \"name\":\"[thing_name]\",\n \"tags\": [\"[tag1]\", \"[tag2]\"],\n \"credentials\": {\n \"identity\": \"[thing-identity]\",\n \"secret\":\"[thing-secret]\"\n },\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/things -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"Temperature Sensor\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:04:04 GMT\nContent-Type: application/json\nContent-Length: 280\nConnection: keep-alive\nLocation: /things/48101ecd-1535-40c6-9ed8-5b1d21e371bb\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602664Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-thing-with-external-id","title":"Create Thing with External ID","text":"It is often the case that the user will want to integrate the existing solutions, e.g. an asset management system, with the Mainflux platform. To simplify the integration between the systems and avoid artificial cross-platform reference, such as special fields in Mainflux Things metadata, it is possible to set Mainflux Thing ID with an existing unique ID while create the Thing. This way, the user can set the existing ID as the Thing ID of a newly created Thing to keep reference between Thing and the asset that Thing represents.
The limitation is that the existing ID has to be unique in the Mainflux domain.
To create a thing with an external ID, you need to provide the ID together with the thing name and other fields, as well as a user_token
For example:
curl -sSiX POST http://localhost/things -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\":\"Temperature Sensor\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:05:06 GMT\nContent-Type: application/json\nContent-Length: 280\nConnection: keep-alive\nLocation: /things/2766ae94-9a08-4418-82ce-3b91cf2ccd3e\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"65ca03bd-eb6b-420b-9d5d-46d459d4f71c\" },\n \"created_at\": \"2023-06-15T09:05:06.538170496Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-thing-with-external-secret","title":"Create Thing with External Secret","text":"It is often the case that the user will want to integrate the existing solutions, e.g. an asset management system, with the Mainflux platform. To simplify the integration between the systems and avoid artificial cross-platform reference, such as special fields in Mainflux Things metadata, it is possible to set Mainflux Thing secret with an existing unique secret when creating the Thing. This way, the user can set the existing secret as the Thing secret of a newly created Thing to keep reference between Thing and the asset that Thing represents. The limitation is that the existing secret has to be unique in the Mainflux domain.
To create a thing with an external secret, you need to provide the secret together with the thing name and other fields, as well as a user_token
For example:
curl -sSiX POST http://localhost/things -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"Temperature Sensor\"\n \"credentials\": {\n \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\"\n }\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:05:06 GMT\nContent-Type: application/json\nContent-Length: 280\nConnection: keep-alive\nLocation: /things/2766ae94-9a08-4418-82ce-3b91cf2ccd3e\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\" },\n \"created_at\": \"2023-06-15T09:05:06.538170496Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-things","title":"Create Things","text":"You can create multiple things at once by entering a series of things structures and a user_token
curl -sSiX POST http://localhost/things/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"id\": \"[thing_id]\",\n \"name\":\"[thing_name]\",\n \"tags\": [\"[tag1]\", \"[tag2]\"],\n \"credentials\": {\n \"identity\": \"[thing-identity]\",\n \"secret\":\"[thing-secret]\"\n },\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n },\n {\n \"id\": \"[thing_id]\",\n \"name\":\"[thing_name]\",\n \"tags\": [\"[tag1]\", \"[tag2]\"],\n \"credentials\": {\n \"identity\": \"[thing-identity]\",\n \"secret\":\"[thing-secret]\"\n },\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n }\n]\nEOF\n
For example:
curl -sSiX POST http://localhost/things/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"name\":\"Motion Sensor\"\n },\n {\n \"name\":\"Light Sensor\"\n }\n]\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:05:45 GMT\nContent-Type: application/json\nContent-Length: 583\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"total\": 2,\n \"things\": [\n {\n \"id\": \"19f59b2d-1e9c-43db-bc84-5432bd52a83f\",\n \"name\": \"Motion Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"941c380a-3a41-40e9-8b79-3087daa4f3a6\" },\n \"created_at\": \"2023-06-15T09:05:45.719182307Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"3709f2b0-9c73-413f-992e-7f6f9b396b0d\",\n \"name\": \"Light Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"798ee6be-311b-4640-99e4-0ccb19e0dcb9\" },\n \"created_at\": \"2023-06-15T09:05:45.719186184Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#create-things-with-external-id","title":"Create Things with external ID","text":"The same as creating a Thing with external ID the user can create multiple things at once by providing UUID v4 format unique ID in a series of things together with a user_token
For example:
curl -sSiX POST http://localhost/things/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111111\",\n \"name\":\"Motion Sensor\"\n },\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111112\",\n \"name\":\"Light Sensor\"\n }\n]\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:06:17 GMT\nContent-Type: application/json\nContent-Length: 583\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"total\": 2,\n \"things\": [\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111111\",\n \"name\": \"Motion Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"325cda17-3a52-465d-89a7-2b63c7d0e3a6\" },\n \"created_at\": \"2023-06-15T09:06:17.967825372Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111112\",\n \"name\": \"Light Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b\" },\n \"created_at\": \"2023-06-15T09:06:17.967828689Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#get-thing","title":"Get Thing","text":"You can get thing entity by entering the thing ID and user_token
curl -sSiX GET http://localhost/things/<thing_id> -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:07:30 GMT\nContent-Type: application/json\nContent-Length: 277\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-things","title":"Get Things","text":"You can get all things in the database by querying /things
endpoint.
curl -sSiX GET http://localhost/things -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/things -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:07:59 GMT\nContent-Type: application/json\nTransfer-Encoding: chunked\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"total\": 8,\n \"things\": [\n {\n \"id\": \"f3047c10-f2c7-4d53-b3c0-bc56c560c546\",\n \"name\": \"Humidity Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"6d11a91f-0bd8-41aa-8e1b-4c6338329c9c\" },\n \"created_at\": \"2023-06-14T12:04:12.740098Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"04b0b2d1-fdaf-4b66-96a0-740a3151db4c\",\n \"name\": \"UV Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"a1e5d77f-8903-4cef-87b1-d793a3c28de3\" },\n \"created_at\": \"2023-06-14T12:04:56.245743Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Temperature Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"65ca03bd-eb6b-420b-9d5d-46d459d4f71c\" },\n \"created_at\": \"2023-06-15T09:05:06.53817Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"19f59b2d-1e9c-43db-bc84-5432bd52a83f\",\n \"name\": \"Motion Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"941c380a-3a41-40e9-8b79-3087daa4f3a6\" },\n \"created_at\": \"2023-06-15T09:05:45.719182Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"3709f2b0-9c73-413f-992e-7f6f9b396b0d\",\n \"name\": \"Light Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"798ee6be-311b-4640-99e4-0ccb19e0dcb9\" },\n \"created_at\": \"2023-06-15T09:05:45.719186Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111111\",\n \"name\": \"Motion Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"325cda17-3a52-465d-89a7-2b63c7d0e3a6\" },\n \"created_at\": \"2023-06-15T09:06:17.967825Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111112\",\n \"name\": \"Light Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b\" },\n \"created_at\": \"2023-06-15T09:06:17.967828Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, tags
and visibility
as query parameters.
curl -sSiX GET http://localhost/things?[offset=<offset>]&[limit=<limit>]&name=[name]&[status=<status>] -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/things?offset=1&limit=5&name=Light Sensor -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:08:39 GMT\nContent-Type: application/json\nContent-Length: 321\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 5,\n \"offset\": 1,\n \"total\": 2,\n \"things\": [\n {\n \"id\": \"eb2670ba-a2be-4ea4-83cb-111111111112\",\n \"name\": \"Light Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"67b6cbb8-4a9e-4d32-8b9c-d7cd3352aa2b\" },\n \"created_at\": \"2023-06-15T09:06:17.967828Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#update-thing","title":"Update Thing","text":"Updating a thing name and/or metadata
curl -sSiX PATCH http://localhost/things/<thing_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"[thing_name]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n }\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"Pressure Sensor\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:09:12 GMT\nContent-Type: application/json\nContent-Length: 332\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:09:12.267074Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-thing-tags","title":"Update Thing Tags","text":"Updating a thing tags
curl -sSiX PATCH http://localhost/things/<thing_id>/tags -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"tags\": [\"tag_1\", ..., \"tag_N\"]\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/tags -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"tags\": [\"sensor\", \"smart\"]\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:09:44 GMT\nContent-Type: application/json\nContent-Length: 347\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"tags\": [\"sensor\", \"smart\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:09:44.766726Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-thing-owner","title":"Update Thing Owner","text":"Updating a thing entity
curl -sSiX PATCH http://localhost/things/<thing_id>/owner -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"owner\": \"[owner_id]\"\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/owner -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"owner\": \"f7c55a1f-dde8-4880-9796-b3a0cd05745b\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:09:44 GMT\nContent-Type: application/json\nContent-Length: 347\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"tags\": [\"sensor\", \"smart\"],\n \"owner\": \"f7c55a1f-dde8-4880-9796-b3a0cd05745b\",\n \"credentials\": { \"secret\": \"c3f8c096-c60f-4375-8494-bca20a12fca7\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:09:44.766726Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#update-thing-secret","title":"Update Thing Secret","text":"Updating a thing secret
curl -sSiX PATCH http://localhost/things/<thing_id>/secret -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"secret\": \"<thing_secret>\"\n}\nEOF\n
For example:
curl -sSiX PATCH http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/secret -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:10:52 GMT\nContent-Type: application/json\nContent-Length: 321\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"tags\": [\"sensor\", \"smart\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:10:52.051497Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#enable-thing","title":"Enable Thing","text":"To enable a thing you need a thing_id
and a user_token
curl -sSiX POST http://localhost/things/<thing_id>/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:11:43 GMT\nContent-Type: application/json\nContent-Length: 321\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"tags\": [\"sensor\", \"smart\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:10:52.051497Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#disable-thing","title":"Disable Thing","text":"To disable a thing you need a thing_id
and a user_token
curl -sSiX POST http://localhost/things/<thing_id>/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/things/48101ecd-1535-40c6-9ed8-5b1d21e371bb/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:11:38 GMT\nContent-Type: application/json\nContent-Length: 322\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Pressure Sensor\",\n \"tags\": [\"sensor\", \"smart\"],\n \"owner\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"credentials\": { \"secret\": \"94939159-9a08-4f17-9e4e-3b91cf2ccd3e\" },\n \"created_at\": \"2023-06-15T09:04:04.292602Z\",\n \"updated_at\": \"2023-06-15T09:10:52.051497Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"disabled\"\n}\n
"},{"location":"api/#channels","title":"Channels","text":""},{"location":"api/#create-channel","title":"Create Channel","text":"To create a channel, you need a user_token
curl -sSiX POST http://localhost/channels -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"id\": \"[channel_id]\",\n \"name\":\"[channel_name]\",\n \"description\":\"[channel_description]\",\n \"owner_id\": \"[owner_id]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/channels -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"Temperature Data\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:12:51 GMT\nContent-Type: application/json\nContent-Length: 218\nConnection: keep-alive\nLocation: /channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Temperature Data\",\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-channel-with-external-id","title":"Create Channel with external ID","text":"Channel is a group of things that could represent a special category in existing systems, e.g. a building level channel could represent the level of a smarting building system. For helping to keep the reference, it is possible to set an existing ID while creating the Mainflux channel. There are two limitations - the existing ID has to be in UUID V4 format and it has to be unique in the Mainflux domain.
To create a channel with external ID, the user needs to provide a UUID v4 format unique ID, and a user_token
For example:
curl -sSiX POST http://localhost/channels -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"name\": \"Humidity Data\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:15:11 GMT\nContent-Type: application/json\nContent-Length: 219\nConnection: keep-alive\nLocation: /channels/48101ecd-1535-40c6-9ed8-5b1d21e371bb\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Humidity Data\",\n \"created_at\": \"2023-06-15T09:15:11.477695Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#create-channels","title":"Create Channels","text":"The same as creating a channel with external ID the user can create multiple channels at once by providing UUID v4 format unique ID in a series of channels together with a user_token
curl -sSiX POST http://localhost/channels/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"id\": \"[channel_id]\",\n \"name\":\"[channel_name]\",\n \"description\":\"[channel_description]\",\n \"owner_id\": \"[owner_id]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n },\n {\n \"id\": \"[channel_id]\",\n \"name\":\"[channel_name]\",\n \"description\":\"[channel_description]\",\n \"owner_id\": \"[owner_id]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n }\n]\nEOF\n
For example:
curl -sSiX POST http://localhost/channels/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"name\":\"Light Data\"\n },\n {\n \"name\":\"Pressure Data\"\n }\n]\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:15:44 GMT\nContent-Type: application/json\nContent-Length: 450\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"channels\": [\n {\n \"id\": \"cb81bbff-850d-471f-bd74-c15d6e1a6c4e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Light Data\",\n \"created_at\": \"2023-06-15T09:15:44.154283Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"fc9bf029-b1d3-4408-8d53-fc576247a4b3\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Pressure Data\",\n \"created_at\": \"2023-06-15T09:15:44.15721Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#create-channels-with-external-id","title":"Create Channels with external ID","text":"As with things, you can create multiple channels with external ID at once
For example:
curl -sSiX POST http://localhost/channels/bulk -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n[\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111111\",\n \"name\":\"Light Data\"\n },\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111112\",\n \"name\":\"Pressure Data\"\n }\n]\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:16:16 GMT\nContent-Type: application/json\nContent-Length: 453\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"channels\": [\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111111\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Light Data\",\n \"created_at\": \"2023-06-15T09:16:16.931016Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111112\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Pressure Data\",\n \"created_at\": \"2023-06-15T09:16:16.934486Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#get-channel","title":"Get Channel","text":"Get a channel entity for a logged-in user
curl -sSiX GET http://localhost/channels/<channel_id> -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:17:17 GMT\nContent-Type: application/json\nContent-Length: 218\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Temperature Data\",\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-channels","title":"Get Channels","text":"You can get all channels for a logged-in user.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, parentID
, ownerID
, tree
and dir
as query parameters.
curl -sSiX GET http://localhost/channels -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/channels -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:17:46 GMT\nContent-Type: application/json\nContent-Length: 1754\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"total\": 8,\n \"channels\": [\n {\n \"id\": \"17129934-4f48-4163-bffe-0b7b532edc5c\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Tokyo\",\n \"created_at\": \"2023-06-14T12:10:07.950311Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Humidity Data\",\n \"created_at\": \"2023-06-15T09:15:11.477695Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111111\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Light Data\",\n \"created_at\": \"2023-06-15T09:16:16.931016Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"977bbd33-5b59-4b7a-a9c3-111111111112\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Pressure Data\",\n \"created_at\": \"2023-06-15T09:16:16.934486Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Temperature Data\",\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"b3867a52-675d-4f05-8cd0-df5a08a63ff3\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"London\",\n \"created_at\": \"2023-06-14T12:09:34.205894Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"cb81bbff-850d-471f-bd74-c15d6e1a6c4e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Light Data\",\n \"created_at\": \"2023-06-15T09:15:44.154283Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"fc9bf029-b1d3-4408-8d53-fc576247a4b3\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Pressure Data\",\n \"created_at\": \"2023-06-15T09:15:44.15721Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#update-channel","title":"Update Channel","text":"Update channel name and/or metadata.
curl -sSiX PUT http://localhost/channels/<channel_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"[channel_name]\",\n \"description\":\"[channel_description]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n }\n}\nEOF\n
For example:
curl -sSiX PUT http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"Jane Doe\",\n \"metadata\": {\n \"location\": \"london\"\n }\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:18:26 GMT\nContent-Type: application/json\nContent-Length: 296\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Jane Doe\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"2023-06-15T09:18:26.886913Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#enable-channel","title":"Enable Channel","text":"To enable a channel you need a channel_id
and a user_token
curl -sSiX POST http://localhost/channels/<channel_id>/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:19:29 GMT\nContent-Type: application/json\nContent-Length: 296\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Jane Doe\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"2023-06-15T09:18:26.886913Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#disable-channel","title":"Disable Channel","text":"To disable a channel you need a channel_id
and a user_token
curl -sSiX POST http://localhost/channels/<channel_id>/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:19:24 GMT\nContent-Type: application/json\nContent-Length: 297\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Jane Doe\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:12:51.162431Z\",\n \"updated_at\": \"2023-06-15T09:18:26.886913Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"disabled\"\n}\n
"},{"location":"api/#connect","title":"Connect","text":"Connect things to channels
actions
 is optional; if not provided, the default actions are m_read
and m_write
.
curl -sSiX POST http://localhost/connect -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subjects\": [\"<thing_id>\"],\n \"objects\": [\"<channel_id>\"],\n \"actions\": [\"[action]\"]\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/connect -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subjects\": [\"48101ecd-1535-40c6-9ed8-5b1d21e371bb\"],\n \"objects\": [\"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\"]\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:21:37 GMT\nContent-Type: application/json\nContent-Length: 247\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"policies\": [\n {\n \"owner_id\": \"\",\n \"subject\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"object\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"actions\": [\"m_write\", \"m_read\"],\n \"created_at\": \"0001-01-01T00:00:00Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"updated_by\": \"\"\n }\n ]\n}\n
Connect a thing to a channel
actions
 is optional; if not provided, the default actions are m_read
and m_write
.
curl -sSiX POST http://localhost/things/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<thing_id>\",\n \"object\": \"<channel_id>\",\n \"actions\": [\"<action>\", \"[action]\"]\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/things/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"object\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:23:28 GMT\nContent-Type: application/json\nContent-Length: 290\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"policies\": [\n {\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"subject\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"object\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"actions\": [\"m_write\", \"m_read\"],\n \"created_at\": \"2023-06-15T09:23:28.769729Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"updated_by\": \"\"\n }\n ]\n}\n
"},{"location":"api/#disconnect","title":"Disconnect","text":"Disconnect things from channels specified by lists of IDs.
curl -sSiX POST http://localhost/disconnect -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subjects\": [\"<thing_id_1>\", \"[thing_id_2]\"],\n \"objects\": [\"<channel_id_1>\", \"[channel_id_2]\"]\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/disconnect -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subjects\": [\"48101ecd-1535-40c6-9ed8-5b1d21e371bb\"],\n \"objects\": [\"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\"]\n}\nEOF\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:23:07 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
Disconnect a thing from a channel
curl -sSiX DELETE http://localhost/things/policies/<subject_id>/<object_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX DELETE http://localhost/things/policies/48101ecd-1535-40c6-9ed8-5b1d21e371bb/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8 -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:25:23 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#access-by-key","title":"Access by Key","text":"Checks if thing has access to a channel
curl -sSiX POST http://localhost/channels/<channel_id>/access -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<thing_secret>\",\n \"action\": \"m_read\" | \"m_write\",\n \"entity_type\": \"thing\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/access -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"48101ecd-1535-40c6-9ed8-5b1d21e371bb\",\n \"action\": \"m_read\",\n \"entity_type\": \"thing\"\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:39:26 GMT\nContent-Type: application/json\nContent-Length: 0\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#identify","title":"Identify","text":"Validates thing's key and returns it's ID if key is valid
curl -sSiX POST http://localhost/identify -H \"Content-Type: application/json\" -H \"Authorization: Thing <thing_secret>\"\n
For example:
curl -sSiX POST http://localhost/identify -H \"Content-Type: application/json\" -H \"Authorization: Thing 6d11a91f-0bd8-41aa-8e1b-4c6338329c9c\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:28:16 GMT\nContent-Type: application/json\nContent-Length: 46\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{ \"id\": \"f3047c10-f2c7-4d53-b3c0-bc56c560c546\" }\n
"},{"location":"api/#messages","title":"Messages","text":""},{"location":"api/#send-messages","title":"Send Messages","text":"Sends message via HTTP protocol
curl -sSiX POST http://localhost/http/channels/<channel_id>/messages -H \"Content-Type: application/senml+json\" -H \"Authorization: Thing <thing_secret>\" -d @- << EOF\n[\n {\n \"bn\": \"<base_name>\",\n \"bt\": [base_time],\n \"bu\": \"[base_unit]\",\n \"bver\": [base_version],\n \"n\": \"<measurement_name>\",\n \"u\": \"<measurement_unit>\",\n \"v\": <measurement_value>\n },\n {\n \"n\": \"[measurement_name]\",\n \"t\": <measurement_time>,\n \"v\": <measurement_value>\n }\n]\nEOF\n
For example:
curl -sSiX POST http://localhost/http/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/messages -H \"Content-Type: application/senml+json\" -H \"Authorization: Thing a83b9afb-9022-4f9e-ba3d-4354a08c273a\" -d @- << EOF\n[\n {\n \"bn\": \"some-base-name:\",\n \"bt\": 1.276020076001e+09,\n \"bu\": \"A\",\n \"bver\": 5,\n \"n\": \"voltage\",\n \"u\": \"V\",\n \"v\": 120.1\n },\n {\n \"n\": \"current\",\n \"t\": -5,\n \"v\": 1.2\n },\n {\n \"n\": \"current\",\n \"t\": -4,\n \"v\": 1.3\n }\n]\nEOF\nHTTP/1.1 202 Accepted\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:40:44 GMT\nContent-Length: 0\nConnection: keep-alive\n
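Note that, per SenML, the base name bn is prepended to each record name n and the base time bt is added to each relative time t, so the second record above resolves to the name some-base-name:current at time 1276020071.001. The arithmetic can be checked quickly in a shell (assuming bc is installed):
echo \"1276020076.001 - 5\" | bc\n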
"},{"location":"api/#read-messages","title":"Read Messages","text":"Reads messages from database for a given channel
curl -sSiX GET http://localhost:<service_port>/channels/<channel_id>/messages?[offset=<offset>]&[limit=<limit>] -H \"Authorization: Thing <thing_secret>\"\n
For example:
curl -sSiX GET http://localhost:9009/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/messages -H \"Authorization: Thing a83b9afb-9022-4f9e-ba3d-4354a08c273a\"\n\nHTTP/1.1 200 OK\nContent-Type: application/json\nDate: Wed, 05 Apr 2023 16:01:49 GMT\nContent-Length: 660\n\n{\n \"offset\": 0,\n \"limit\": 10,\n \"format\": \"messages\",\n \"total\": 3,\n \"messages\": [{\n \"channel\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"publisher\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"protocol\": \"http\",\n \"name\": \"some-base-name:voltage\",\n \"unit\": \"V\",\n \"time\": 1276020076.001,\n \"value\": 120.1\n },\n {\n \"channel\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"publisher\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"protocol\": \"http\",\n \"name\": \"some-base-name:current\",\n \"unit\": \"A\",\n \"time\": 1276020072.001,\n \"value\": 1.3\n },\n {\n \"channel\": \"aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8\",\n \"publisher\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"protocol\": \"http\",\n \"name\": \"some-base-name:current\",\n \"unit\": \"A\",\n \"time\": 1276020071.001,\n \"value\": 1.2\n }\n ]\n}\n
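If you are only interested in the values, the response can be piped through jq, for instance (drop the -i flag so only the body is printed; this is an illustrative sketch that assumes jq is installed):
curl -sS http://localhost:9009/channels/aecf0902-816d-4e38-a5b3-a1ad9a7cf9e8/messages -H \"Authorization: Thing a83b9afb-9022-4f9e-ba3d-4354a08c273a\" | jq '.messages[].value'\n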
"},{"location":"api/#groups","title":"Groups","text":""},{"location":"api/#create-group","title":"Create group","text":"To create a group, you need the group name and a user_token
curl -sSiX POST http://localhost/groups -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"<group_name>\",\n \"description\":\"[group_description]\",\n \"parent_id\": \"[parent_id]\",\n \"owner_id\": \"[owner_id]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n },\n \"status\": \"[enabled|disabled]\"\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/groups -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:41:42 GMT\nContent-Type: application/json\nContent-Length: 252\nConnection: keep-alive\nLocation: /groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\",\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
When you use parent_id
make sure the parent is an already existing group
For example:
curl -sSiX POST http://localhost/groups -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\": \"Customer Support\",\n \"description\": \"This group would be responsible for providing support to users of the platform.\",\n \"parent_id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\"\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 09:42:34 GMT\nContent-Type: application/json\nContent-Length: 306\nConnection: keep-alive\nLocation: /groups/dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"parent_id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Customer Support\",\n \"description\": \"This group would be responsible for providing support to users of the platform.\",\n \"created_at\": \"2023-06-15T09:42:34.063997Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-group","title":"Get group","text":"Get a group entity for a logged-in user
curl -sSiX GET http://localhost/groups/<group_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:00:52 GMT\nContent-Type: application/json\nContent-Length: 252\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\",\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#get-groups","title":"Get groups","text":"You can get all groups for a logged-in user.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, parentID
, ownerID
, tree
and dir
as query parameters.
curl -sSiX GET http://localhost/groups -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/groups -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:13:50 GMT\nContent-Type: application/json\nContent-Length: 807\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 0,\n \"offset\": 0,\n \"total\": 3,\n \"groups\": [\n {\n \"id\": \"0a4a2c33-2d0e-43df-b51c-d905aba99e17\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Sensor Operators\",\n \"created_at\": \"2023-06-14T13:33:52.249784Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\",\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"parent_id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Customer Support\",\n \"description\": \"This group would be responsible for providing support to users of the platform.\",\n \"created_at\": \"2023-06-15T09:42:34.063997Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
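To page through or filter the results, append the query parameters listed above, for example (illustrative values):
curl -sSiX GET \"http://localhost/groups?offset=0&limit=5&status=enabled\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n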
"},{"location":"api/#get-group-parents","title":"Get Group Parents","text":"You can get all groups that are parents of a group for a logged-in user.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, parentID
, ownerID
, tree
and dir
as query parameters.
curl -sSiX GET http://localhost/groups/<group_id>/parents -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/groups/dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a/parents?tree=true -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:16:03 GMT\nContent-Type: application/json\nContent-Length: 627\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"offset\": 0,\n \"total\": 3,\n \"groups\": [\n {\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\",\n \"level\": -1,\n \"children\": [\n {\n \"id\": \"dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"parent_id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Customer Support\",\n \"description\": \"This group would be responsible for providing support to users of the platform.\",\n \"created_at\": \"2023-06-15T09:42:34.063997Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ],\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#get-group-children","title":"Get Group Children","text":"You can get all groups that are children of a group for a logged-in user.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, parentID
, ownerID
, tree
and dir
as query parameters.
curl -sSiX GET http://localhost/groups/<group_id>/children -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/children?tree=true -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:17:13 GMT\nContent-Type: application/json\nContent-Length: 755\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"offset\": 0,\n \"total\": 3,\n \"groups\": [\n {\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Security Engineers\",\n \"description\": \"This group would be responsible for securing the platform.\",\n \"path\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"children\": [\n {\n \"id\": \"dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"parent_id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"name\": \"Customer Support\",\n \"description\": \"This group would be responsible for providing support to users of the platform.\",\n \"level\": 1,\n \"path\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e.dd2dc8d4-f7cf-42f9-832b-81cae9a8e90a\",\n \"created_at\": \"2023-06-15T09:42:34.063997Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ],\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"api/#update-group","title":"Update group","text":"Update group entity
curl -sSiX PUT http://localhost/groups/<group_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"[group_name]\",\n \"description\":\"[group_description]\",\n \"metadata\": {\n \"[key1]\": \"[value1]\",\n \"[key2]\": \"[value2]\"\n }\n}\nEOF\n
For example:
curl -sSiX PUT http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"name\":\"Data Analysts\",\n \"description\":\"This group would be responsible for analyzing data collected from sensors.\",\n \"metadata\": {\n \"location\": \"london\"\n }\n}\nEOF\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:17:56 GMT\nContent-Type: application/json\nContent-Length: 328\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Data Analysts\",\n \"description\": \"This group would be responsible for analyzing data collected from sensors.\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"2023-06-15T10:17:56.475241Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#disable-group","title":"Disable group","text":"Disable a group entity
curl -sSiX POST http://localhost/groups/<group_id>/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/disable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:18:28 GMT\nContent-Type: application/json\nContent-Length: 329\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Data Analysts\",\n \"description\": \"This group would be responsible for analyzing data collected from sensors.\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"2023-06-15T10:17:56.475241Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"disabled\"\n}\n
"},{"location":"api/#enable-group","title":"Enable group","text":"Enable a group entity
curl -sSiX POST http://localhost/groups/<group_id>/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX POST http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/enable -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:18:55 GMT\nContent-Type: application/json\nContent-Length: 328\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"id\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"name\": \"Data Analysts\",\n \"description\": \"This group would be responsible for analyzing data collected from sensors.\",\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-15T09:41:42.860481Z\",\n \"updated_at\": \"2023-06-15T10:17:56.475241Z\",\n \"updated_by\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"status\": \"enabled\"\n}\n
"},{"location":"api/#assign","title":"Assign","text":"Assign user to a group
curl -sSiX POST http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<user_id>\",\n \"object\": \"<group_id>\",\n \"actions\": [\"<member_action>\"]\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"object\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"actions\": [\"g_list\", \"c_list\"]\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 10:19:59 GMT\nContent-Type: application/json\nContent-Length: 0\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#members","title":"Members","text":"You can get all users assigned to a group.
If you want to paginate your results then use offset
, limit
, metadata
, name
, status
, identity
, and tag
as query parameters.
Keep in mind that the user identified by the user_token
must be assigned to the same group identified by group_id
with g_list
action or be the owner of the group identified by group_id
.
curl -sSiX GET http://localhost/groups/<group_id>/members -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX GET http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/members -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:21:29 GMT\nContent-Type: application/json\nContent-Length: 318\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"total\": 1,\n \"members\": [\n {\n \"id\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"name\": \"Jane Doe\",\n \"tags\": [\"male\", \"developer\"],\n \"credentials\": { \"identity\": \"updated.jane.doe@gmail.com\" },\n \"metadata\": { \"location\": \"london\" },\n \"created_at\": \"2023-06-14T13:46:47.322648Z\",\n \"updated_at\": \"2023-06-14T13:59:53.422595Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
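The pagination and filtering query parameters listed above work here as well, for example (illustrative values):
curl -sSiX GET \"http://localhost/groups/2766ae94-9a08-4418-82ce-3b91cf2ccd3e/members?offset=0&limit=5&status=enabled\" -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n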
"},{"location":"api/#unassign","title":"Unassign","text":"Unassign user from group
curl -sSiX DELETE http://localhost/users/policies/<subject_id>/<object_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX DELETE http://localhost/users/policies/1890c034-7ef9-4cde-83df-d78ea1d4d281/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:25:27 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#policies","title":"Policies","text":""},{"location":"api/#add-policies","title":"Add policies","text":"Only actions defined on Predefined Policies section are allowed.
curl -sSiX POST http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<user_id>\",\n \"object\": \"<group_id>\",\n \"actions\": [\"<actions>\", \"[actions]\"]\n}\nEOF\n
curl -sSiX POST http://localhost/things/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<thing_id>\",\n \"object\": \"<channel_id>\",\n \"actions\": [\"<actions>\", \"[actions]\"]\n}\nEOF\n
curl -sSiX POST http://localhost/things/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<user_id>\",\n \"object\": \"<channel_id>\",\n \"actions\": [\"<actions>\", \"[actions]\"],\n \"external\": true\n}\nEOF\n
For example:
curl -sSiX POST http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"object\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"actions\": [\"g_add\", \"c_list\"]\n}\nEOF\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:26:50 GMT\nContent-Type: application/json\nContent-Length: 0\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#update-policies","title":"Update policies","text":"Only actions defined on Predefined Policies section are allowed.
curl -sSiX PUT http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<user_id>\",\n \"object\": \"<group_id>\",\n \"actions\": [\"<actions>\", \"[actions]\"]\n}\nEOF\n
curl -sSiX PUT http://localhost/things/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"<thing_id> | <user_id>\",\n \"object\": \"<channel_id>\",\n \"actions\": [\"<actions>\", \"[actions]\"]\n}\nEOF\n
For example:
curl -sSiX PUT http://localhost/users/policies -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d @- << EOF\n{\n \"subject\": \"1890c034-7ef9-4cde-83df-d78ea1d4d281\",\n \"object\": \"2766ae94-9a08-4418-82ce-3b91cf2ccd3e\",\n \"actions\": [\"g_list\", \"c_list\"]\n}\nEOF\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:27:19 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"api/#delete-policies","title":"Delete policies","text":"Only policies defined on Predefined Policies section are allowed.
curl -sSiX DELETE http://localhost/users/policies/<user_id>/<channel_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
curl -sSiX DELETE http://localhost/things/policies/<thing_id>/<channel_id> -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -sSiX DELETE http://localhost/users/policies/1890c034-7ef9-4cde-83df-d78ea1d4d281/2766ae94-9a08-4418-82ce-3b91cf2ccd3e -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Thu, 15 Jun 2023 11:28:31 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"architecture/","title":"Architecture","text":""},{"location":"architecture/#components","title":"Components","text":"Mainflux IoT platform is comprised of the following services:
Service - Description
users - Manages platform's users and auth concerns in regards to users and groups
things - Manages platform's things, channels and auth concerns in regards to things and channels
http-adapter - Provides an HTTP interface for sending messages via HTTP
mqtt-adapter - Provides an MQTT and MQTT over WS interface for sending and receiving messages via MQTT
ws-adapter - Provides a WebSocket interface for sending and receiving messages via WS
coap-adapter - Provides a CoAP interface for sending and receiving messages via CoAP
opcua-adapter - Provides an OPC-UA interface for sending and receiving messages via OPC-UA
lora-adapter - Provides a LoRa Server forwarder for sending and receiving messages via LoRa
mainflux-cli - Command line interface
"},{"location":"architecture/#domain-model","title":"Domain Model","text":"The platform is built around 2 main entities: users and things.
User
represents the real (human) user of the system. Users are represented via their email address used as their identity, and password used as their secret, which they use as platform access credentials in order to obtain an access token. Once logged into the system, a user can manage their resources (i.e. groups, things and channels) in CRUD fashion and define access control policies by connecting them.
Group
represents a logical grouping of users. It is used to simplify access control management by allowing users to be grouped together. When assigning a user to a group, we create a policy that defines what that user can do with the resources of the group. This way, a user can be assigned to multiple groups, and each group can have multiple users assigned to it. Users in one group have access to other users in the same group as long as they have the required policy. A group can also be assigned to another group, thus creating a group hierarchy. When assigning a user to a group, we create a policy that defines what that user can do with the group and other users in the group.
Thing
represents devices (or applications) connected to Mainflux that use the platform for message exchange with other \"things\".
Channel
represents a communication channel. It serves as a message topic that can be consumed by all of the things connected to it. It also serves as a grouping mechanism for things. A thing can be connected to multiple channels, and a channel can have multiple things connected to it. A user can be connected to a channel as well, thus allowing them to have access to the messages published to that channel and also to things connected to that channel with the required policy. A channel can also be assigned to another channel, thus creating a channel hierarchy. Both things and users can be assigned to a channel. When assigning a thing to a channel, we create a policy that defines what that thing can do to the channel, for example reading or writing messages to it. When assigning a user to a channel, we create a policy that defines what that user can do with the channel and things connected to it, thereby enabling the sharing of things between users.
Mainflux uses NATS as its default messaging backbone, due to its lightweight and performant nature. You can treat its subjects as a physical representation of Mainflux channels, where the subject name is constructed using the channel's unique identifier. Mainflux also provides the ability to change your default message broker to RabbitMQ, VerneMQ or Kafka.
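As an illustration only, if the NATS port is reachable and you have the NATS CLI installed, you can watch the raw traffic for a channel; the subject name used below (channels.<channel_id>) is an assumption and may differ between Mainflux versions and broker choices:
nats sub \"channels.<channel_id>\"\n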
In general, there is no constraint put on the content that is being exchanged through channels. However, in order to be post-processed and normalized, messages should be formatted using SenML.
"},{"location":"architecture/#edge","title":"Edge","text":"Mainflux platform can be run on the edge as well. Deploying Mainflux on a gateway makes it able to collect, store and analyze data, organize and authenticate devices. To connect Mainflux instances running on a gateway with Mainflux in a cloud we can use two gateway services developed for that purpose:
Running Mainflux on a gateway moves computation from the cloud towards the edge, thus decentralizing the IoT system. Since we can deploy the same Mainflux code on a gateway and in the cloud, there are many benefits, but the biggest one is easy deployment and adoption - once engineers understand how to deploy and maintain the platform, they will be able to apply those same skills to any part of the edge-fog-cloud continuum. This is because the platform is designed to be consistent, making it easy for engineers to move between these environments. This consistency will save engineers time and effort, and it will also help to improve the reliability and security of the platform. The same set of tools can be used, and the same patches and bug fixes can be applied. The whole system is much easier to reason about, and the maintenance is much easier and less costly.
"},{"location":"authentication/","title":"Authentication","text":""},{"location":"authentication/#user-authentication","title":"User authentication","text":"For user authentication Mainflux uses Authentication keys. There are two types of authentication keys:
Authentication keys are represented and distributed by the corresponding JWT. User keys are issued when a user logs in. Each user request (other than registration and login) contains a user key that is used to authenticate the user.
The recovery key is the password recovery key: a short-lived token used for the password recovery process.
The following actions are supported:
By default, Mainflux uses Mainflux Thing secret for authentication. The Thing secret is a secret key that's generated at the Thing creation. In order to authenticate, the Thing needs to send its secret with the message. The way the secret is passed depends on the protocol used to send a message and differs from adapter to adapter. For more details on how this secret is passed around, please check out messaging section. This is the default Mainflux authentication mechanism and this method is used if the composition is started using the following command:
docker-compose -f docker/docker-compose.yml up\n
"},{"location":"authentication/#mutual-tls-authentication-with-x509-certificates","title":"Mutual TLS Authentication with X.509 Certificates","text":"In most of the cases, HTTPS, WSS, MQTTS or secure CoAP are secure enough. However, sometimes you might need an even more secure connection. Mainflux supports mutual TLS authentication (mTLS) based on X.509 certificates. By default, the TLS protocol only proves the identity of the server to the client using the X.509 certificate and the authentication of the client to the server is left to the application layer. TLS also offers client-to-server authentication using client-side X.509 authentication. This is called two-way or mutual authentication. Mainflux currently supports mTLS over HTTP, WS, MQTT and MQTT over WS protocols. In order to run Docker composition with mTLS turned on, you can execute the following command from the project root:
AUTH=x509 docker-compose -f docker/docker-compose.yml up -d\n
Mutual authentication includes client-side certificates. Certificates can be generated using the simple script provided here. In order to create a valid certificate, you need to create a Mainflux thing using the process described in the provisioning section. After that, you need to fetch the created thing's secret. The thing secret will be used to create an X.509 certificate for the corresponding thing. To create a certificate, execute the following commands:
cd docker/ssl\nmake ca CN=<common_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>\nmake server_cert CN=<common_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>\nmake thing_cert THING_SECRET=<thing_secret> CRT_FILE_NAME=<cert_name> O=<organization> OU=<organizational_unit> emailAddress=<email_address>\n
These commands use OpenSSL tool, so please make sure that you have it installed and set up before running these commands. The default values for Makefile variables are
CRT_LOCATION = certs\nTHING_SECRET = d7cc2964-a48b-4a6e-871a-08da28e7883d\nO = Mainflux\nOU = mainflux\nEA = info@mainflux.com\nCN = localhost\nCRT_FILE_NAME = thing\n
Normally, in order to get things running, you will need to specify only THING_SECRET
. The other variables are not mandatory and the termination should work with the default values.
make ca
will generate a self-signed certificate that will later be used as a CA to sign other generated certificates. The CA will expire in 3 years. make server_cert
will generate and sign (with the previously created CA) a server cert, which will expire after 1000 days. This cert is used as a Mainflux server-side certificate in the usual TLS flow to establish an HTTPS or MQTTS connection. make thing_cert
will finally generate and sign a client-side certificate and private key for the thing. In this example <thing_secret>
represents secret of the thing and <cert_name>
represents the name of the certificate and key file which will be saved in docker/ssl/certs
directory. Generated Certificate will expire after 2 years. The key must be stored in the x.509 certificate CN
field. This script is created for testing purposes and is not meant to be used in production. We strongly recommend avoiding self-signed certificates and using a certificate management tool such as Vault for the production.
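To verify that a generated client certificate really carries the thing secret in its CN field, you can inspect it with OpenSSL (using the default CRT_FILE_NAME from above):
openssl x509 -in docker/ssl/certs/thing.crt -noout -subject -dates\n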
Once you have created CA and server-side cert, you can spin the composition using:
AUTH=x509 docker-compose -f docker/docker-compose.yml up -d\n
Then, you can create a user and provision things and channels. Now, in order to send a message from a specific thing to a channel, you need to connect the thing to the channel and generate the corresponding client certificate using the aforementioned commands. To publish a message to the channel, the thing should send the following request:
"},{"location":"authentication/#wss","title":"WSS","text":"const WebSocket = require(\"ws\");\n// Do not verify self-signed certificates if you are using one.\nprocess.env.NODE_TLS_REJECT_UNAUTHORIZED = \"0\";\n// Replace <channel_id> and <thing_secret> with real values.\nconst ws = new WebSocket(\n \"wss://localhost/ws/channels/<channel_id>/messages?authorization=<thing_secret>\",\n // This is ClientOptions object that contains client cert and client key in the form of string. You can easily load these strings from cert and key files.\n {\n cert: `-----BEGIN CERTIFICATE-----....`,\n key: `-----BEGIN RSA PRIVATE KEY-----.....`,\n }\n);\nws.on(\"open\", () => {\n ws.send(\"something\");\n});\nws.on(\"message\", (data) => {\n console.log(data);\n});\nws.on(\"error\", (e) => {\n console.log(e);\n});\n
As you can see, Authorization
header does not have to be present in the HTTP request, since the secret is present in the certificate. However, if you pass Authorization
header, it must be the same as the key in the cert. In the case of MQTTS, password
field in the CONNECT message must match the key from the certificate. In the case of WSS, Authorization
header or authorization
query parameter must match cert key.
curl -s -S -i --cacert docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -X POST -H \"Content-Type: application/senml+json\" https://localhost/http/channels/<channel_id>/messages -d '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]'\n
"},{"location":"authentication/#mqtts","title":"MQTTS","text":""},{"location":"authentication/#publish","title":"Publish","text":"mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost -p 8883 --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -m '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]'\n
"},{"location":"authentication/#subscribe","title":"Subscribe","text":"mosquitto_sub -u <thing_id> -P <thing_secret> --cafile docker/ssl/certs/ca.crt --cert docker/ssl/certs/<thing_cert_name>.crt --key docker/ssl/certs/<thing_cert_key>.key -t channels/<channel_id>/messages -h localhost -p 8883\n
"},{"location":"authorization/","title":"Authorization","text":""},{"location":"authorization/#policies","title":"Policies","text":"Mainflux uses policies to control permissions on entities: users, things, groups and channels. Under the hood, Mainflux uses its own fine grained access control list. Policies define permissions for the entities. For example, which user has access to a specific thing. Such policies have three main components: subject, object, and action.
To put it briefly:
Subject: As the name suggests, it is the subject that will have the policy such as users or things. Mainflux uses entity UUID on behalf of the real entities.
Object: Objects are Mainflux entities (e.g. channels or groups) represented by their UUID.
Action: This is the action that the subject wants to do on the object. This is one of the supported actions (read, write, update, delete, list or add)
Above this we have a domain specifier called entityType. This specifies either group-level access or client-level access. With the client entity type, a client can have an action on another client in the same group, while with the group entity type, a client has an action on a group, i.e. a direct association.
All three components create a single policy.
// Policy represents an argument struct for making policy-related function calls.\n\ntype Policy struct {\n Subject string `json:\"subject\"`\n Object string `json:\"object\"`\n Actions []string `json:\"actions\"`\n}\n\nvar examplePolicy = Policy{\n Subject: userID,\n Object: groupID,\n Actions: []string{groupListAction},\n}\n
The initial implementation of policy handling is meant to be used on the group level.
There are three types of policies:
m_ Policy represents client rights to send and receive messages to a channel. Only channel members with corresponding rights can publish or receive messages to/from the channel. m_read and m_write are the only supported actions. With m_read the client can read messages from the channel. With m_write the client can write messages to the channel.
g_ Policy represents the client's rights to modify the group/channel itself. Only group/channel members with correct rights can modify or update the group/channel, or add/remove members to/from the group. g_add, g_list, g_update and g_delete are the only supported actions. With g_add the client can add members to the group/channel. With g_list the client can list the group/channel and its members. With g_update the client can update the group/channel. With g_delete the client can delete the group/channel.
Finally, the c_ policy represents the rights the member has over other members of the group/channel. Only group/channel members with correct rights can modify or update other members of the group/channel. c_list, c_update, c_share and c_delete are the only supported actions. With c_list the client can list other members of the group/channel. With c_update the client can update other members of the group/channel. With c_share the client can share the group/channel with other clients. With c_delete the client can delete other members of the group/channel.
By default, Mainflux adds the listing action to c_ and g_ policies. This means that all members of the group/channel can list its members. When adding a new member to a group with the g_add, g_update or g_delete action, Mainflux will automatically add the g_list action to the new member's policy. This means that the new member will be able to list the group/channel. When adding a new member to a group/channel with the c_update or c_delete action, Mainflux will automatically add the c_list action to the new member's policy. This means that the new member will be able to list the members of the group/channel.
"},{"location":"authorization/#example","title":"Example","text":"The rules are specified in the policies association table. The table looks like this:
subject - object - actions
clientA - groupA - [\"g_add\", \"g_list\", \"g_update\", \"g_delete\"]
clientB - groupA - [\"c_list\", \"c_update\", \"c_delete\"]
clientC - groupA - [\"c_update\"]
clientD - groupA - [\"c_list\"]
clientE - groupB - [\"c_list\", \"c_update\", \"c_delete\"]
clientF - groupB - [\"c_update\"]
clientD - groupB - [\"c_list\"]
clientG - groupC - [\"m_read\"]
clientH - groupC - [\"m_read\", \"m_write\"]
, and c_update
represent actions that allowed for the client with client_id
to execute over all the other clients that are members of the group with gorup_id
. Actions such as g_update
represent actions allowed for the client with client_id
to execute against a group with group_id
.
For the sake of simplicity, all the operations at the moment are executed on the group level - the group acts as a namespace in the context of authorization and is required.
Actions for clientA
they can add members to groupA
clientA
lists groups groupA
will be listedclientA
can list members of groupA
groupA
they can change the status of groupA
Actions for clientB
when they list clients they will list clientA
, clientC
and clientD
since they are connected in the same group groupA
and they have c_list
actions.
clientA
, clientC
and clientD
since they are in the same groupA
they can change clients status of clients connected to the same group they are connected in i.e they are able to change the status of clientA
, clientC
and clientD
since they are in the same group groupA
Actions for clientC
they can update clients connected to the same group they are connected in i.e they can update clientA
, clientB
and clientD
since they are in the same groupA
Actions for clientD
when they list clients they will list clientA
, clientB
and clientC
since they are connected in the same group groupA
and they have c_list
actions and also clientE
and clientF
since they are connected to the same group groupB
and they have c_list
actions
Actions for clientE
when they list clients they will list clientF
and clientD
since they are connected in the same group groupB
and they have c_list
actions
clientF
and clientD
since they are in the same groupB
they can change clients status of clients connected to the same group they are connected in i.e they are able to change the status of clientF
and clientD
since they are in the same group groupB
Actions for clientF
they can update clients connected to the same group they are connected in i.e they can update clientE
, and clientD
since they are in the same groupB
Actions for clientG
they can read messages posted in group groupC
Actions for clientH
they can read from groupC
and write messages to groupC
If the user has no such policy, the operation will be denied; otherwise, the operation will be allowed.
In order to check whether a user has the policy or not, Mainflux makes a gRPC call to policies API, then policies sub-service handles the checking existence of the policy.
All policies are stored in the Postgres Database. The database responsible for storing all policies is deployed along with the Mainflux.
"},{"location":"authorization/#predefined-policies","title":"Predefined Policies","text":"Mainflux comes with predefined policies.
"},{"location":"authorization/#users-service-related-policies","title":"Users service related policies","text":"<admin_id>
has admin
role as part of its description.Things
: c_update
, c_list
, c_share
and c_delete
.c_update
, c_list
and c_delete
policies on the Thing
since they are the owner.c_list
policy on that thing.c_update
policy on that thing.c_share
policy on that thing.c_delete
policy on that thing.g_add
, g_update
, g_list
and g_delete
policy on the group.You can add policies as well through an HTTP endpoint. Only admin or member with g_add
policy to the object can use this endpoint. Therefore, you need an authentication token.
user_token must belong to the user.
Must-have: user_token, group_id, user_id and policy_actions
curl -isSX POST 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d '{\"subject\": \"<user_id>\", \"object\": \"<group_id>\", \"actions\": [\"<action_1>\", ..., \"<action_N>\"]}'\n
For example:
curl -isSX POST 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" -d '{\"subject\": \"0b530292-3c1d-4c7d-aff5-b141b5c5d3e9\", \"object\": \"0a4a2c33-2d0e-43df-b51c-d905aba99e17\", \"actions\": [\"c_list\", \"g_list\"]}'\n\nHTTP/1.1 201 Created\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:40:06 GMT\nContent-Type: application/json\nContent-Length: 0\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"authorization/#updating-policies","title":"Updating Policies","text":"Must-have: user_token, group_id, user_id and policy_actions
curl -isSX PUT 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" -d '{\"subject\": \"<user_id>\", \"object\": \"<group_id>\", \"actions\": [\"<action_1>\", ..., \"<action_N>\"]}'\n
For example:
curl -isSX PUT 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" -d '{\"subject\": \"0b530292-3c1d-4c7d-aff5-b141b5c5d3e9\", \"object\": \"0a4a2c33-2d0e-43df-b51c-d905aba99e17\", \"actions\": [\"c_delete\"]}'\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:41:00 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
"},{"location":"authorization/#lisiting-policies","title":"Lisiting Policies","text":"Must-have: user_token
curl -isSX GET 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\"\n
For example:
curl -isSX GET 'http://localhost/users/policies' -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\"\n\nHTTP/1.1 200 OK\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:41:32 GMT\nContent-Type: application/json\nContent-Length: 305\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n\n{\n \"limit\": 10,\n \"offset\": 0,\n \"total\": 1,\n \"policies\": [\n {\n \"owner_id\": \"94939159-d129-4f17-9e4e-cc2d615539d7\",\n \"subject\": \"0b530292-3c1d-4c7d-aff5-b141b5c5d3e9\",\n \"object\": \"0a4a2c33-2d0e-43df-b51c-d905aba99e17\",\n \"actions\": [\"c_delete\"],\n \"created_at\": \"2023-06-14T13:40:06.582315Z\",\n \"updated_at\": \"2023-06-14T13:41:00.636733Z\"\n }\n ]\n}\n
"},{"location":"authorization/#delete-policies","title":"Delete Policies","text":"The admin can delete policies. Only policies defined on Predefined Policies section are allowed.
Must-have: user_token, object, subjects_ids and policies
curl -isSX DELETE -H \"Accept: application/json\" -H \"Authorization: Bearer <user_token>\" http://localhost/users/policies -d '{\"subject\": \"user_id\", \"object\": \"<group_id>\"}'\n
For example:
curl -isSX DELETE -H 'Accept: application/json' -H \"Authorization: Bearer $USER_TOKEN\" http://localhost/users/policies -d '{\"subject\": \"0b530292-3c1d-4c7d-aff5-b141b5c5d3e9\", \"object\": \"0a4a2c33-2d0e-43df-b51c-d905aba99e17\"}'\n\nHTTP/1.1 204 No Content\nServer: nginx/1.23.3\nDate: Wed, 14 Jun 2023 13:43:46 GMT\nContent-Type: application/json\nConnection: keep-alive\nAccess-Control-Expose-Headers: Location\n
If you delete a policy, it will be removed from the policy storage, and further authorization checks related to that policy will fail.
"},{"location":"benchmark/","title":"Test spec","text":""},{"location":"benchmark/#tools","title":"Tools","text":"MZbench is open-source tool for that can generate large traffic and measure performance of the application. MZBench is distributed, cloud-aware benchmarking tool that can seamlessly scale to millions of requests. It's originally developed by satori-com but we will use mzbench fork because it can run with newest Erlang releases and the original MzBench repository is not maintained anymore.
We will describe installing MZBench server on Ubuntu 18.04 (this can be on your PC or some external cloud server, like droplet on Digital Ocean)
Install latest OTP/Erlang (it's version 22.3 for me)
sudo apt update\nsudo apt install erlang\n
For running this tool you will also need libz-dev package:
sudo apt-get update\nsudo apt-get install libz-dev\n
and pip:
sudo apt install python-pip\n
Clone mzbench tool and install the requirements:
git clone https://github.com/mzbench/mzbench\ncd mzbench\nsudo pip install -r requirements.txt\n
This should be enough for installing MZBench, and you can now start MZBench server with this CLI command:
./bin/mzbench start_server\n
The MZBench CLI lets you control the server and benchmarks from the command line.
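For example, a benchmark scenario can be started straight from the CLI; the scenario path below is only a placeholder, and the exact commands and flags are described in the MZBench documentation:
./bin/mzbench run <path_to_scenario_file>\n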
Another way of using MZBench is through the dashboard. After starting the server, you can open the dashboard at http://localhost:4800
.
Note that if you are installing MZBench on an external server (e.g. a Digital Ocean droplet), you'll be able to reach the MZBench dashboard on your server's IP address at port 4800, provided you previously:
network_interface
from 127.0.0.1
to 0.0.0.0
in configuration file. Default configuration file location is ~/.config/mzbench/server.config
, create it from sample configuration file ~/.config/mzbench/server.config.example
4800
with ufw allow 4800
MZBench can run your test scenarios on many nodes, simultaneously. For now, you are able to run tests locally, so your nodes will be virtual nodes on machine where MZBench server is installed (your PC or DO droplet). You can try one of our MQTT scenarios that uses vmq_mzbench worker. Copy-paste scenario in MZBench dashboard, click button Environmental variables -> Add from script and add appropriate values. Because it's running locally, you should try with smaller values, for example for fan-in scenario use 100 publishers on 2 nodes. Try this before moving forward in setting up Amazon EC2 plugin.
"},{"location":"benchmark/#setting-up-amazon-ec2-plugin","title":"Setting up Amazon EC2 plugin","text":"For larger-scale tests we will set up MZBench to run each node as one of Amazon EC2 instance with built-in plugin mzb_api_ec2_plugin.
This is basic architecture when running MZBench:
Every node that runs your scenarios will be one of Amazon EC2 instance; plus one more additional node \u2014 the director node. The director doesn't run scenarios, it collects the metrics from the other nodes and runs post and pre hooks. So, if you want to run jobs on 10 nodes, actually 11 EC2 instances will be created. All instances will be automatically terminated when the test finishes.
We will use one of ready-to-use Amazon Machine Images (AMI) with all necessary dependencies. We will choose AMI with OTP 22, because that is the version we have on MZBench server. So, we will search for MZBench-erl22
AMI and find one with id ami-03a169923be706764
available in us-west-1b
zone. If you have chosen this AMI, everything you do from now must be in us-west-1 zone. We must have IAM user with AmazonEC2FullAccess
and IAMFullAccess
permissions policies, and his access_key_id
and secret_access_key
goes to configuration file. In EC2 dashboard, you must create new security group MZbench_cluster
where you will add inbound rules to open ssh and TCP ports 4801-4804. Also, in EC2 dashboard go to section key pairs
, click Actions
-> Import key pair
and upload public key you have on your MZBench server in ~/.ssh/id_rsa.pub
(if you need to create new, run ssh-keygen
and follow instructions). Give it a name on EC2 dashboard, put that name (key_name
) and path (keyfile
) in configuration file.
[\n{mzbench_api, [\n{network_interface,\"0.0.0.0\"},\n{keyfile, \"~/.ssh/id_rsa\"},\n{cloud_plugins, [\n {local,#{module => mzb_dummycloud_plugin}},\n {ec2, #{module => mzb_api_ec2_plugin,\n instance_spec => [\n {image_id, \"ami-03a169923be706764\"},\n {group_set, [\"MZbench_cluster\"]},\n {instance_type, \"t2.micro\"},\n {availability_zone, \"us-west-1b\"},\n {iam_instance_profile_name, \"mzbench\"},\n {key_name, \"key_pair_name\"}\n ],\n config => [\n {ec2_host, \"ec2.us-west-1.amazonaws.com\"},\n {access_key_id, \"IAM_USER_ACCESS_KEY_ID\"},\n {secret_access_key, \"IAM_USER_SECRET_ACCESS_KEY\"}\n ],\n instance_user => \"ec2-user\"\n }}\n ]\n}\n]}].\n
There is both local
and ec2
plugin in this configuration file, so you can choose to run tests on either of them. Default path for configuration file is ~/.config/mzbench/server.config
, if it's somewhere else, server is starting with:
./bin/mzbench start_server --config <config_file>\n
Note that every time you update the configuration you have to restart the server:
./bin/mzbench restart_server\n
"},{"location":"benchmark/#test-scenarios","title":"Test scenarios","text":"Testing environment to be determined.
"},{"location":"benchmark/#message-publishing","title":"Message publishing","text":"In this scenario, large number of requests are sent to HTTP adapter service every second. This test checks how much time HTTP adapter needs to respond to each request.
"},{"location":"benchmark/#results","title":"Results","text":"TBD
"},{"location":"benchmark/#create-and-get-client","title":"Create and get client","text":"In this scenario, large number of requests are sent to things service to create things and than to retrieve their data. This test checks how much time things service needs to respond to each request.
"},{"location":"benchmark/#results_1","title":"Results","text":"TBD
"},{"location":"bootstrap/","title":"Bootstrap","text":"Bootstrapping
refers to a self-starting process that is supposed to proceed without external input. Mainflux platform supports bootstrapping process, but some of the preconditions need to be fulfilled in advance. The device can trigger a bootstrap when:s
Bootstrapping and provisioning are two different procedures. Provisioning refers to entities management while bootstrapping is related to entity configuration.
Bootstrapping procedure is the following:
1) Configure device with Bootstrap service URL, an external key and external ID
Optionally create Mainflux channels if they don't exist
Optionally create Mainflux thing if it doesn't exist
2) Upload configuration for the Mainflux thing
3) Bootstrap - send a request for the configuration
4) Connect/disconnect thing from channels, update or remove configuration
"},{"location":"bootstrap/#configuration","title":"Configuration","text":"The configuration of Mainflux thing consists of three major parts:
Also, the configuration contains an external ID and external key, which will be explained later. In order to enable the thing to start bootstrapping process, the user needs to upload a valid configuration for that specific thing. This can be done using the following HTTP request:
curl -s -S -i -X POST -H \"Authorization: Bearer <user_token>\" -H \"Content-Type: application/json\" http://localhost:9013/things/configs -d '{\n \"external_id\":\"09:6:0:sb:sa\",\n \"thing_id\": \"7d63b564-3092-4cda-b441-e65fc1f285f0\",\n \"external_key\":\"key\",\n \"name\":\"some\",\n \"channels\":[\n \"78c9b88c-b2c4-4d58-a973-725c32194fb3\",\n \"c4d6edb2-4e23-49f2-b6ea-df8bc6769591\"\n],\n \"content\": \"config...\",\n \"client_cert\": \"PEM cert\",\n \"client_key\": \"PEM client cert key\",\n \"ca_cert\": \"PEM CA cert\"\n}'\n
In this example, channels
field represents the list of Mainflux channel IDs the thing is connected to. These channels need to be provisioned before the configuration is uploaded. Field content
represents custom configuration. This custom configuration contains parameters that can be used to set up the thing. It can also be empty if no additional set up is needed. Field name
is human readable name and thing_id
is an ID of the Mainflux thing. This field is not required. If thing_id
is empty, corresponding Mainflux thing will be created implicitly and its ID will be sent as a part of Location
header of the response. Fields client_cert
, client_key
and ca_cert
represent PEM or base64-encoded DER client certificate, client certificate key and trusted CA, respectively.
There are two more fields: external_id
and external_key
. External ID represents an ID of the device that corresponds to the given thing. For example, this can be a MAC address or the serial number of the device. The external key represents the device key. This is the secret key that's safely stored on the device and it is used to authorize the thing during the bootstrapping process. Please note that external ID and external key and Mainflux ID and Mainflux key are completely different concepts. External id and key are only used to authenticate a device that corresponds to the specific Mainflux thing during the bootstrapping procedure. As Configuration optionally contains client certificate and issuing CA, it's possible that device is not able to establish TLS encrypted communication with Mainflux before bootstrapping. For that purpose, Bootstrap service exposes endpoint used for secure bootstrapping which can be used regardless of protocol (HTTP or HTTPS). Both device and Bootstrap service use a secret key to encrypt the content. Encryption is done as follows:
Please have on mind that secret key is passed to the Bootstrap service as an environment variable. As security measurement, Bootstrap service removes this variable once it reads it on startup. However, depending on your deployment, this variable can still be visible as a part of your configuration or terminal emulator environment.
For more details on which encryption mechanisms are used, please take a look at the implementation.
"},{"location":"bootstrap/#bootstrapping","title":"Bootstrapping","text":"Currently, the bootstrapping procedure is executed over the HTTP protocol. Bootstrapping is nothing else but fetching and applying the configuration that corresponds to the given Mainflux thing. In order to fetch the configuration, the thing needs to send a bootstrapping request:
curl -s -S -i -H \"Authorization: Thing <external_key>\" http://localhost:9013/things/bootstrap/<external_id>\n
The response body should look something like:
{\n \"thing_id\":\"7d63b564-3092-4cda-b441-e65fc1f285f0\",\n \"thing_key\":\"d0f6ff22-f521-4674-9065-e265a9376a78\",\n \"channels\":[\n {\n \"id\":\"c4d6edb2-4e23-49f2-b6ea-df8bc6769591\",\n \"name\":\"c1\",\n \"metadata\":null\n },\n {\n \"id\":\"78c9b88c-b2c4-4d58-a973-725c32194fb3\",\n \"name\":\"c0\",\n \"metadata\":null\n }\n ],\n \"content\":\"cofig...\",\n \"client_cert\":\"PEM cert\",\n \"client_key\":\"PEM client cert key\",\n \"ca_cert\":\"PEM CA cert\"\n}\n
The response consists of the ID and key of the Mainflux thing, the list of channels and the custom configuration (content field). The list of channels contains not just channel IDs but also additional Mainflux channel data (name and metadata fields).
Uploading a configuration does not automatically connect the thing to the given list of channels. In order to connect the thing to the channels, the user needs to send the following HTTP request:
curl -s -S -i -X PUT -H \"Authorization: Bearer <user_token>\" -H \"Content-Type: application/json\" http://localhost:9013/things/state/<thing_id> -d '{\"state\": 1}'\n
In order to disconnect, the same request should be sent with the value of state
set to 0.
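For example, disconnecting the thing from the configured channels uses the same endpoint as above, just with state set to 0:
curl -s -S -i -X PUT -H "Authorization: Bearer <user_token>" -H "Content-Type: application/json" http://localhost:9013/things/state/<thing_id> -d '{"state": 0}'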
For more information about the Bootstrap service API, please check out the API documentation.
"},{"location":"certs/","title":"Certs","text":"Provisioning is a process of configuration of an IoT platform in which system operator creates and sets-up different entities used in the platform - users, groups, channels and things.
"},{"location":"certs/#certs-service","title":"Certs Service","text":"Issues certificates for things. Certs
service can create certificates to be used when Mainflux is deployed to support mTLS. The Certs service will create a certificate for a valid thing ID if a valid user token is passed and the user is the owner of the provided thing ID.
The Certs service can create certificates in two modes:
Development mode - used when Vault is not configured; if MF_CERTS_VAULT_HOST is empty then Development mode is on.
PKI mode - uses Vault as PKI certificate management; the certs service will proxy requests to Vault, first checking access rights and saving info on the successfully created certificate.
To issue a certificate:
\nUSER_TOKEN=`curl -s --insecure -S -X POST https://localhost/users/tokens/issue -H \"Content-Type: application/json\" -d '{\"identity\":\"john.doe@email.com\", \"secret\":\"12345678\"}' | grep -oP '\"access_token\":\"\\K[^\"]+'`\n\ncurl -s -S -X POST http://localhost:9019/certs -H \"Authorization: Bearer $USER_TOKEN\" -H 'Content-Type: application/json' -d '{\"thing_id\":\"<thing_id>\", \"rsa_bits\":2048, \"key_type\":\"rsa\"}'\n
{\n \"ThingID\": \"\",\n \"ClientCert\": \"-----BEGIN CERTIFICATE-----\\nMIIDmTCCAoGgAwIBAgIRANmkAPbTR1UYeYO0Id/4+8gwDQYJKoZIhvcNAQELBQAw\\nVzESMBAGA1UEAwwJbG9jYWxob3N0MREwDwYDVQQKDAhNYWluZmx1eDEMMAoGA1UE\\nCwwDSW9UMSAwHgYJKoZIhvcNAQkBFhFpbmZvQG1haW5mbHV4LmNvbTAeFw0yMDA2\\nMzAxNDIxMDlaFw0yMDA5MjMyMjIxMDlaMFUxETAPBgNVBAoTCE1haW5mbHV4MREw\\nDwYDVQQLEwhtYWluZmx1eDEtMCsGA1UEAxMkYjAwZDBhNzktYjQ2YS00NTk3LTli\\nNGYtMjhkZGJhNTBjYTYyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA\\ntgS2fLUWG3CCQz/l6VRQRJfRvWmdxK0mW6zIXGeeOILYZeaLiuiUnohwMJ4RiMqT\\nuJbInAIuO/Tt5osfrCFFzPEOLYJ5nZBBaJfTIAxqf84Ou1oeMRll4wpzgeKx0rJO\\nXMAARwn1bT9n3uky5QQGSLy4PyyILzSXH/1yCQQctdQB/Ar/UI1TaYoYlGzh7dHT\\nWpcxq1HYgCyAtcrQrGD0rEwUn82UBCrnya+bygNqu0oDzIFQwa1G8jxSgXk0mFS1\\nWrk7rBipsvp8HQhdnvbEVz4k4AAKcQxesH4DkRx/EXmU2UvN3XysvcJ2bL+UzMNI\\njNhAe0pgPbB82F6zkYZ/XQIDAQABo2IwYDAOBgNVHQ8BAf8EBAMCB4AwHQYDVR0l\\nBBYwFAYIKwYBBQUHAwIGCCsGAQUFBwMBMA4GA1UdDgQHBAUBAgMEBjAfBgNVHSME\\nGDAWgBRs4xR91qEjNRGmw391xS7x6Tc+8jANBgkqhkiG9w0BAQsFAAOCAQEAW/dS\\nV4vNLTZwBnPVHUX35pRFxPKvscY+vnnpgyDtITgZHYe0KL+Bs3IHuywtqaezU5x1\\nkZo+frE1OcpRvp7HJtDiT06yz+18qOYZMappCWCeAFWtZkMhlvnm3TqTkgui6Xgl\\nGj5xnPb15AOlsDE2dkv5S6kEwJGHdVX6AOWfB4ubUq5S9e4ABYzXGUty6Hw/ZUmJ\\nhCTRVJ7cQJVTJsl1o7CYT8JBvUUG75LirtoFE4M4JwsfsKZXzrQffTf1ynqI3dN/\\nHWySEbvTSWcRcA3MSmOTxGt5/zwCglHDlWPKMrXtjTW7NPuGL5/P9HSB9HGVVeET\\nDUMdvYwgj0cUCEu3LA==\\n-----END CERTIFICATE-----\\n\",\n \"IssuingCA\": \"\",\n \"CAChain\": null,\n \"ClientKey\": \"-----BEGIN RSA PRIVATE KEY-----\\nMIIEowIBAAKCAQEAtgS2fLUWG3CCQz/l6VRQRJfRvWmdxK0mW6zIXGeeOILYZeaL\\niuiUnohwMJ4RiMqTuJbInAIuO/Tt5osfrCFFzPEOLYJ5nZBBaJfTIAxqf84Ou1oe\\nMRll4wpzgeKx0rJOXMAARwn1bT9n3uky5QQGSLy4PyyILzSXH/1yCQQctdQB/Ar/\\nUI1TaYoYlGzh7dHTWpcxq1HYgCyAtcrQrGD0rEwUn82UBCrnya+bygNqu0oDzIFQ\\nwa1G8jxSgXk0mFS1Wrk7rBipsvp8HQhdnvbEVz4k4AAKcQxesH4DkRx/EXmU2UvN\\n3XysvcJ2bL+UzMNIjNhAe0pgPbB82F6zkYZ/XQIDAQABAoIBAALoal3tqq+/iWU3\\npR2oKiweXMxw3oNg3McEKKNJSH7QoFJob3xFoPIzbc9pBxCvY9LEHepYIpL0o8RW\\nHqhqU6olg7t4ZSb+Qf1Ax6+wYxctnJCjrO3N4RHSfevqSjr6fEQBEUARSal4JNmr\\n0hNUkCEjWrIvrPFMHsn1C5hXR3okJQpGsad4oCGZDp2eZ/NDyvmLBLci9/5CJdRv\\n6roOF5ShWweKcz1+pfy666Q8RiUI7H1zXjPaL4yqkv8eg/WPOO0dYF2Ri2Grk9OY\\n1qTM0W1vi9zfncinZ0DpgtwMTFQezGwhUyJHSYHmjVBA4AaYIyOQAI/2dl5fXM+O\\n9JfXpOUCgYEA10xAtMc/8KOLbHCprpc4pbtOqfchq/M04qPKxQNAjqvLodrWZZgF\\nexa+B3eWWn5MxmQMx18AjBCPwbNDK8Rkd9VqzdWempaSblgZ7y1a0rRNTXzN5DFP\\noiuRQV4wszCuj5XSdPn+lxApaI/4+TQ0oweIZCpGW39XKePPoB5WZiMCgYEA2G3W\\niJncRpmxWwrRPi1W26E9tWOT5s9wYgXWMc+PAVUd/qdDRuMBHpu861Qoghp/MJog\\nBYqt2rQqU0OxvIXlXPrXPHXrCLOFwybRCBVREZrg4BZNnjyDTLOu9C+0M3J9ImCh\\n3vniYqb7S0gRmoDM0R3Zu4+ajfP2QOGLXw1qHH8CgYEAl0EQ7HBW8V5UYzi7XNcM\\nixKOb0YZt83DR74+hC6GujTjeLBfkzw8DX+qvWA8lxLIKVC80YxivAQemryv4h21\\nX6Llx/nd1UkXUsI+ZhP9DK5y6I9XroseIRZuk/fyStFWsbVWB6xiOgq2rKkJBzqw\\nCCEQpx40E6/gsqNDiIAHvvUCgYBkkjXc6FJ55DWMLuyozfzMtpKsVYeG++InSrsM\\nDn1PizQS/7q9mAMPLCOP312rh5CPDy/OI3FCbfI1GwHerwG0QUP/bnQ3aOTBmKoN\\n7YnsemIA/5w16bzBycWE5x3/wjXv4aOWr9vJJ/siMm0rtKp4ijyBcevKBxHpeGWB\\nWAR1FQKBgGIqAxGnBpip9E24gH894BaGHHMpQCwAxARev6sHKUy27eFUd6ipoTva\\n4Wv36iz3gxU4R5B0gyfnxBNiUab/z90cb5+6+FYO13kqjxRRZWffohk5nHlmFN9K\\nea7KQHTfTdRhOLUzW2yVqLi9pzfTfA6Yqf3U1YD3bgnWrp1VQnjo\\n-----END RSA PRIVATE KEY-----\\n\",\n \"PrivateKeyType\": \"\",\n \"Serial\": \"\",\n \"Expire\": \"0001-01-01T00:00:00Z\"\n}\n
"},{"location":"certs/#pki-mode","title":"PKI mode","text":"When MF_CERTS_VAULT_HOST
is set, it is presumed that Vault is installed and the certs service will issue certificates using the Vault API.
First you'll need to set up Vault
.
To set up Vault, follow the steps in Build Your Own Certificate Authority (CA).
To set up the certs service with Vault, the following environment variables must be set:
MF_CERTS_VAULT_HOST=vault-domain.com\nMF_CERTS_VAULT_PKI_PATH=<vault_pki_path>\nMF_CERTS_VAULT_ROLE=<vault_role>\nMF_CERTS_VAULT_TOKEN=<vault_access_token>\n
For lab purposes you can use the docker-compose setup and the script for setting up PKI found in https://github.com/mteodor/vault.
Issuing a certificate is the same as in Development mode. In this mode, certificates can also be revoked:
curl -s -S -X DELETE http://localhost:9019/certs/revoke -H \"Authorization: Bearer $TOKEN\" -H 'Content-Type: application/json' -d '{\"thing_id\":\"c30b8842-507c-4bcd-973c-74008cef3be5\"}'\n
For more information about the Certification service API, please check out the API documentation.
"},{"location":"cli/","title":"CLI","text":"Mainflux CLI makes it easy to manage users, things, channels and messages.
The CLI can be downloaded as a separate asset from the project releases or it can be built with the GNU Make tool:
Get the mainflux code
go get github.com/mainflux/mainflux\n
Build the mainflux-cli
make cli\n
which will build mainflux-cli
in <project_root>/build
folder.
Executing build/mainflux-cli
without any arguments will output help with all available commands and flags:
Usage:\n mainflux-cli [command]\n\nAvailable Commands:\n bootstrap Bootstrap management\n certs Certificates management\n channels Channels management\n completion Generate the autocompletion script for the specified shell\n groups Groups management\n health Health Check\n help Help about any command\n messages Send or read messages\n policies Policies management\n provision Provision things and channels from a config file\n subscription Subscription management\n things Things management\n users Users management\n\nFlags:\n -b, --bootstrap-url string Bootstrap service URL (default \"http://localhost\")\n -s, --certs-url string Certs service URL (default \"http://localhost\")\n -c, --config string Config path\n -C, --contact string Subscription contact query parameter\n -y, --content-type string Message content type (default \"application/senml+json\")\n -e, --email string User email query parameter\n -h, --help help for mainflux-cli\n -p, --http-url string HTTP adapter URL (default \"http://localhost/http\")\n -i, --insecure Do not check for TLS cert\n -l, --limit uint Limit query parameter (default 10)\n -m, --metadata string Metadata query parameter\n -n, --name string Name query parameter\n -o, --offset uint Offset query parameter\n -r, --raw Enables raw output mode for easier parsing of output\n -R, --reader-url string Reader URL (default \"http://localhost\")\n -z, --state string Bootstrap state query parameter\n -S, --status string User status query parameter\n -t, --things-url string Things service URL (default \"http://localhost\")\n -T, --topic string Subscription topic query parameter\n -u, --users-url string Users service URL (default \"http://localhost\")\n\nUse \"mainflux-cli [command] --help\" for more information about a command.\n
It is also possible to use the docker image mainflux/cli to execute CLI commands:
docker run -it --rm mainflux/cli -u http://<IP_SERVER> [command]\n
For example:
docker run -it --rm mainflux/cli -u http://192.168.160.1 users token admin@example.com 12345678\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2MjEzMDcsImlhdCI6MTY4MDYyMDQwNywiaWRlbnRpdHkiOiJhZG1pbkBleGFtcGxlLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6ImYxZTA5Y2YxLTgzY2UtNDE4ZS1iZDBmLWU3M2I3M2MxNDM2NSIsInR5cGUiOiJhY2Nlc3MifQ.iKdBv3Ko7PKuhjTC6Xs-DvqfKScjKted3ZMorTwpXCd4QrRSsz6NK_lARG6LjpE0JkymaCMVMZlzykyQ6ZgwpA\",\n \"access_type\": \"Bearer\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA3MDY4MDcsImlhdCI6MTY4MDYyMDQwNywiaWRlbnRpdHkiOiJhZG1pbkBleGFtcGxlLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6ImYxZTA5Y2YxLTgzY2UtNDE4ZS1iZDBmLWU3M2I3M2MxNDM2NSIsInR5cGUiOiJyZWZyZXNoIn0.-0tOtXFZi48VS-FxkCnVxnW2RUkJvqUmzRz3_EYSSKFyKealoFrv7sZIUvrdvKomnUFzXshP0EygL8vjWP1SFw\"\n}\n
You can execute each command with -h
flag for more information about that command, e.g.
mainflux-cli channels -h\n
Response should look like this:
Channels management: create, get, update or delete Channel and get list of Things connected or not connected to a Channel\n\nUsage:\n mainflux-cli channels [command]\n\nAvailable Commands:\n connections Connections list\n create Create channel\n disable Change channel status to disabled\n enable Change channel status to enabled\n get Get channel\n update Update channel\n\nFlags:\n -h, --help help for channels\n\nGlobal Flags:\n -b, --bootstrap-url string Bootstrap service URL (default \"http://localhost\")\n -s, --certs-url string Certs service URL (default \"http://localhost\")\n -c, --config string Config path\n -C, --contact string Subscription contact query parameter\n -y, --content-type string Message content type (default \"application/senml+json\")\n -e, --email string User email query parameter\n -h, --help help for mainflux-cli\n -p, --http-url string HTTP adapter URL (default \"http://localhost/http\")\n -i, --insecure Do not check for TLS cert\n -l, --limit uint Limit query parameter (default 10)\n -m, --metadata string Metadata query parameter\n -n, --name string Name query parameter\n -o, --offset uint Offset query parameter\n -r, --raw Enables raw output mode for easier parsing of output\n -R, --reader-url string Reader URL (default \"http://localhost\")\n -z, --state string Bootstrap state query parameter\n -S, --status string User status query parameter\n -t, --things-url string Things service URL (default \"http://localhost\")\n -T, --topic string Subscription topic query parameter\n -u, --users-url string Users service URL (default \"http://localhost\")\n\n\nUse \"mainflux-cli channels [command] --help\" for more information about a command.\n
"},{"location":"cli/#service","title":"Service","text":""},{"location":"cli/#get-mainflux-things-services-health-check","title":"Get Mainflux Things services health check","text":"mainflux-cli health\n
Response should look like this:
{\n \"build_time\": \"2023-06-26_13:16:16\",\n \"commit\": \"8589ad58f4ac30a198c101a7b8aa7ac2c54b2d05\",\n \"description\": \"things service\",\n \"status\": \"pass\",\n \"version\": \"0.13.0\"\n}\n
"},{"location":"cli/#users-management","title":"Users management","text":""},{"location":"cli/#create-user","title":"Create User","text":"Mainflux has two options for user creation. Either the <user_token>
is provided or not. If the <user_token>
is provided then the created user will be owned by the user identified by the <user_token>
. Otherwise, when the token is not used, the user will not have an owner, since anyone can create new users; the token argument is still required, however, to keep the command consistent. For more details, please see the Authorization page.
mainflux-cli users create <user_name> <user_email> <user_password>\n\nmainflux-cli users create <user_name> <user_email> <user_password> <user_token>\n
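For example, creating a user with the identity and secret used earlier in this guide (the values are illustrative):
mainflux-cli users create "John Doe" john.doe@email.com 12345678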
"},{"location":"cli/#login-user","title":"Login User","text":"mainflux-cli users token <user_email> <user_password>\n
"},{"location":"cli/#get-user-token-from-refresh-token","title":"Get User Token From Refresh Token","text":"mainflux-cli users refreshtoken <refresh_token>\n
"},{"location":"cli/#get-user","title":"Get User","text":"mainflux-cli users get <user_id> <user_token>\n
"},{"location":"cli/#get-users","title":"Get Users","text":"mainflux-cli users get all <user_token>\n
"},{"location":"cli/#update-user-metadata","title":"Update User Metadata","text":"mainflux-cli users update <user_id> '{\"name\":\"value1\", \"metadata\":{\"value2\": \"value3\"}}' <user_token>\n
"},{"location":"cli/#update-user-tags","title":"Update User Tags","text":"mainflux-cli users update tags <user_id> '[\"tag1\", \"tag2\"]' <user_token>\n
"},{"location":"cli/#update-user-identity","title":"Update User Identity","text":"mainflux-cli users update identity <user_id> <user_email> <user_token>\n
"},{"location":"cli/#update-user-owner","title":"Update User Owner","text":"mainflux-cli users update owner <user_id> <owner_id> <user_token>\n
"},{"location":"cli/#update-user-password","title":"Update User Password","text":"mainflux-cli users password <old_password> <password> <user_token>\n
"},{"location":"cli/#enable-user","title":"Enable User","text":"mainflux-cli users enable <user_id> <user_token>\n
"},{"location":"cli/#disable-user","title":"Disable User","text":"mainflux-cli users disable <user_id> <user_token>\n
"},{"location":"cli/#get-profile-of-the-user-identified-by-the-token","title":"Get Profile of the User identified by the token","text":"mainflux-cli users profile <user_token>\n
"},{"location":"cli/#groups-management","title":"Groups management","text":""},{"location":"cli/#create-group","title":"Create Group","text":"mainflux-cli groups create '{\"name\":\"<group_name>\",\"description\":\"<description>\",\"parentID\":\"<parent_id>\",\"metadata\":\"<metadata>\"}' <user_token>\n
"},{"location":"cli/#get-group","title":"Get Group","text":"mainflux-cli groups get <group_id> <user_token>\n
"},{"location":"cli/#get-groups","title":"Get Groups","text":"mainflux-cli groups get all <user_token>\n
"},{"location":"cli/#update-group","title":"Update Group","text":"mainflux-cli groups update '{\"id\":\"<group_id>\",\"name\":\"<group_name>\",\"description\":\"<description>\",\"metadata\":\"<metadata>\"}' <user_token>\n
"},{"location":"cli/#get-group-members","title":"Get Group Members","text":"mainflux-cli groups members <group_id> <user_token>\n
"},{"location":"cli/#get-memberships","title":"Get Memberships","text":"mainflux-cli groups membership <member_id> <user_token>\n
"},{"location":"cli/#assign-members-to-group","title":"Assign Members to Group","text":"mainflux-cli groups assign <member_ids> <member_type> <group_id> <user_token>\n
"},{"location":"cli/#unassign-members-to-group","title":"Unassign Members to Group","text":"mainflux-cli groups unassign <member_ids> <group_id> <user_token>\n
"},{"location":"cli/#enable-group","title":"Enable Group","text":"mainflux-cli groups enable <group_id> <user_token>\n
"},{"location":"cli/#disable-group","title":"Disable Group","text":"mainflux-cli groups disable <group_id> <user_token>\n
"},{"location":"cli/#things-management","title":"Things management","text":""},{"location":"cli/#create-thing","title":"Create Thing","text":"mainflux-cli things create '{\"name\":\"myThing\"}' <user_token>\n
"},{"location":"cli/#create-thing-with-metadata","title":"Create Thing with metadata","text":"mainflux-cli things create '{\"name\":\"myThing\", \"metadata\": {\"key1\":\"value1\"}}' <user_token>\n
"},{"location":"cli/#bulk-provision-things","title":"Bulk Provision Things","text":"mainflux-cli provision things <file> <user_token>\n
file
- A CSV or JSON file containing thing names (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be:
thing1,\nthing2,\nthing3,\n
in which the first column is thing names.
A comparable JSON file would be
[\n {\n \"name\": \"<thing1_name>\",\n \"status\": \"enabled\"\n },\n {\n \"name\": \"<thing2_name>\",\n \"status\": \"disabled\"\n },\n {\n \"name\": \"<thing3_name>\",\n \"status\": \"enabled\",\n \"credentials\": {\n \"identity\": \"<thing3_identity>\",\n \"secret\": \"<thing3_secret>\"\n }\n }\n]\n
With JSON you can specify more fields of the things you want to create.
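A sketch of running the bulk provisioning with such a file (the file name things.csv is illustrative):
mainflux-cli provision things things.csv <user_token>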
"},{"location":"cli/#update-thing","title":"Update Thing","text":"mainflux-cli things update <thing_id> '{\"name\":\"value1\", \"metadata\":{\"key1\": \"value2\"}}' <user_token>\n
"},{"location":"cli/#update-thing-tags","title":"Update Thing Tags","text":"mainflux-cli things update tags <thing_id> '[\"tag1\", \"tag2\"]' <user_token>\n
"},{"location":"cli/#update-thing-owner","title":"Update Thing Owner","text":"mainflux-cli things update owner <thing_id> <owner_id> <user_token>\n
"},{"location":"cli/#update-thing-secret","title":"Update Thing Secret","text":"mainflux-cli things update secret <thing_id> <secet> <user_token>\n
"},{"location":"cli/#identify-thing","title":"Identify Thing","text":"mainflux-cli things identify <thing_secret>\n
"},{"location":"cli/#enable-thing","title":"Enable Thing","text":"mainflux-cli things enable <thing_id> <user_token>\n
"},{"location":"cli/#disable-thing","title":"Disable Thing","text":"mainflux-cli things disable <thing_id> <user_token>\n
"},{"location":"cli/#get-thing","title":"Get Thing","text":"mainflux-cli things get <thing_id> <user_token>\n
"},{"location":"cli/#get-things","title":"Get Things","text":"mainflux-cli things get all <user_token>\n
"},{"location":"cli/#get-a-subset-list-of-provisioned-things","title":"Get a subset list of provisioned Things","text":"mainflux-cli things get all --offset=1 --limit=5 <user_token>\n
"},{"location":"cli/#share-thing","title":"Share Thing","text":"mainflux-cli things share <channel_id> <user_id> <allowed_actions> <user_token>\n
"},{"location":"cli/#channels-management","title":"Channels management","text":""},{"location":"cli/#create-channel","title":"Create Channel","text":"mainflux-cli channels create '{\"name\":\"myChannel\"}' <user_token>\n
"},{"location":"cli/#bulk-provision-channels","title":"Bulk Provision Channels","text":"mainflux-cli provision channels <file> <user_token>\n
file
- A CSV or JSON file containing channel names (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be:
<channel1_name>,\n<channel2_name>,\n<channel3_name>,\n
in which the first column is channel names.
A comparable JSON file would be
[\n {\n \"name\": \"<channel1_name>\",\n \"description\": \"<channel1_description>\",\n \"status\": \"enabled\"\n },\n {\n \"name\": \"<channel2_name>\",\n \"description\": \"<channel2_description>\",\n \"status\": \"disabled\"\n },\n {\n \"name\": \"<channel3_name>\",\n \"description\": \"<channel3_description>\",\n \"status\": \"enabled\"\n }\n]\n
With JSON you can specify more fields of the channels you want to create.
"},{"location":"cli/#update-channel","title":"Update Channel","text":"mainflux-cli channels update '{\"id\":\"<channel_id>\",\"name\":\"myNewName\"}' <user_token>\n
"},{"location":"cli/#enable-channel","title":"Enable Channel","text":"mainflux-cli channels enable <channel_id> <user_token>\n
"},{"location":"cli/#disable-channel","title":"Disable Channel","text":"mainflux-cli channels disable <channel_id> <user_token>\n
"},{"location":"cli/#get-channel","title":"Get Channel","text":"mainflux-cli channels get <channel_id> <user_token>\n
"},{"location":"cli/#get-channels","title":"Get Channels","text":"mainflux-cli channels get all <user_token>\n
"},{"location":"cli/#get-a-subset-list-of-provisioned-channels","title":"Get a subset list of provisioned Channels","text":"mainflux-cli channels get all --offset=1 --limit=5 <user_token>\n
"},{"location":"cli/#connect-thing-to-channel","title":"Connect Thing to Channel","text":"mainflux-cli things connect <thing_id> <channel_id> <user_token>\n
"},{"location":"cli/#bulk-connect-things-to-channels","title":"Bulk Connect Things to Channels","text":"mainflux-cli provision connect <file> <user_token>\n
file
- A CSV or JSON file containing thing and channel ids (must have extension .csv
or .json
)user_token
- A valid user auth token for the current systemAn example CSV file might be
<thing_id1>,<channel_id1>\n<thing_id2>,<channel_id2>\n
in which the first column is thing IDs and the second column is channel IDs. A connection will be created for each thing to each channel. This example would result in 4 connections being created.
A comparable JSON file would be
{\n \"subjects\": [\"<thing_id1>\", \"<thing_id2>\"],\n \"objects\": [\"<channel_id1>\", \"<channel_id2>\"]\n}\n
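To apply such a file, the same provisioning command is used (the file name connections.json is illustrative):
mainflux-cli provision connect connections.json <user_token>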
"},{"location":"cli/#disconnect-thing-from-channel","title":"Disconnect Thing from Channel","text":"mainflux-cli things disconnect <thing_id> <channel_id> <user_token>\n
"},{"location":"cli/#get-a-subset-list-of-channels-connected-to-thing","title":"Get a subset list of Channels connected to Thing","text":"mainflux-cli things connections <thing_id> <user_token>\n
"},{"location":"cli/#get-a-subset-list-of-things-connected-to-channel","title":"Get a subset list of Things connected to Channel","text":"mainflux-cli channels connections <channel_id> <user_token>\n
"},{"location":"cli/#messaging","title":"Messaging","text":""},{"location":"cli/#send-a-message-over-http","title":"Send a message over HTTP","text":"mainflux-cli messages send <channel_id> '[{\"bn\":\"Dev1\",\"n\":\"temp\",\"v\":20}, {\"n\":\"hum\",\"v\":40}, {\"bn\":\"Dev2\", \"n\":\"temp\",\"v\":20}, {\"n\":\"hum\",\"v\":40}]' <thing_secret>\n
"},{"location":"cli/#read-messages-over-http","title":"Read messages over HTTP","text":"mainflux-cli messages read <channel_id> <user_token> -R <reader_url>\n
"},{"location":"cli/#bootstrap","title":"Bootstrap","text":""},{"location":"cli/#add-configuration","title":"Add configuration","text":"mainflux-cli bootstrap create '{\"external_id\": \"myExtID\", \"external_key\": \"myExtKey\", \"name\": \"myName\", \"content\": \"myContent\"}' <user_token> -b <bootstrap-url>\n
"},{"location":"cli/#view-configuration","title":"View configuration","text":"mainflux-cli bootstrap get <thing_id> <user_token> -b <bootstrap-url>\n
"},{"location":"cli/#update-configuration","title":"Update configuration","text":"mainflux-cli bootstrap update '{\"mainflux_id\":\"<thing_id>\", \"name\": \"newName\", \"content\": \"newContent\"}' <user_token> -b <bootstrap-url>\n
"},{"location":"cli/#remove-configuration","title":"Remove configuration","text":"mainflux-cli bootstrap remove <thing_id> <user_token> -b <bootstrap-url>\n
"},{"location":"cli/#bootstrap-configuration","title":"Bootstrap configuration","text":"mainflux-cli bootstrap bootstrap <external_id> <external_key> -b <bootstrap-url>\n
"},{"location":"cli/#config","title":"Config","text":"Mainflux CLI tool supports configuration files that contain some of the basic settings so you don't have to specify them through flags. Once you set the settings, they remain stored locally.
mainflux-cli config <parameter> <value>\n
Response should look like this:
ok \n
This command is used to set the flags to be used by CLI in a local TOML file. The default location of the TOML file is in the same directory as the CLI binary. To change the location of the TOML file you can run the command:
mainflux-cli config <parameter> <value> -c \"cli/file_name.toml\"\n
The possible parameters that can be set using the config command are:
bootstrap_url - Bootstrap service URL (default \"http://localhost:9013\")
certs_url - Certs service URL (default \"http://localhost:9019\")
http_adapter_url - HTTP adapter URL (default \"http://localhost/http\")
msg_content_type - Message content type (default \"application/senml+json\")
reader_url - Reader URL (default \"http://localhost\")
things_url - Things service URL (default \"http://localhost:9000\")
tls_verification - Do not check for TLS cert
users_url - Users service URL (default \"http://localhost:9002\")
state - Bootstrap state query parameter
status - User status query parameter
topic - Subscription topic query parameter
contact - Subscription contact query parameter
email - User email query parameter
limit - Limit query parameter (default 10)
metadata - Metadata query parameter
name - Name query parameter
offset - Offset query parameter
raw_output - Enables raw output mode for easier parsing of output
"},{"location":"dev-guide/","title":"Developer's guide","text":""},{"location":"dev-guide/#getting-mainflux","title":"Getting Mainflux","text":"Mainflux source can be found in the official Mainflux GitHub repository. You should fork this repository in order to make changes to the project. The forked version of the repository should be cloned using the following:
git clone <forked repository> $SOMEPATH/mainflux\ncd $SOMEPATH/mainflux\n
Note: If your $SOMEPATH
is equal to $GOPATH/src/github.com/mainflux/mainflux
, make sure that your $GOROOT
and $GOPATH
do not overlap (otherwise, go modules won't work).
Make sure that you have Protocol Buffers (version 21.12) compiler (protoc
) installed.
Go Protobuf installation instructions are here. Go Protobuf uses C bindings, so you will need to install C++ protobuf as a prerequisite. Mainflux uses Protocol Buffers for Go with Gadgets
to generate faster marshaling and unmarshaling Go code. Protocol Buffers for Go with Gadgets installation instructions can be found here.
A copy of Go (version 1.19.4) and docker template (version 3.7) will also need to be installed on your system.
If any of these versions seem outdated, the latest can always be found in our CI script.
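A quick way to check the toolchain on your machine against these versions (the output will vary with your installation):
protoc --version
go version
docker --version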
"},{"location":"dev-guide/#build-all-services","title":"Build All Services","text":"Use the GNU Make tool to build all Mainflux services:
make\n
Build artifacts will be put in the build
directory.
N.B. All Mainflux services are built as statically linked binaries. This way they are portable (they can be transferred to any platform just by placing them there and running them), as they contain all needed libraries and do not rely on shared system libraries. This also helps with creating FROM scratch Docker images.
"},{"location":"dev-guide/#build-individual-microservice","title":"Build Individual Microservice","text":"Individual microservices can be built with:
make <microservice_name>\n
For example:
make http\n
will build the HTTP Adapter microservice.
"},{"location":"dev-guide/#building-dockers","title":"Building Dockers","text":"Dockers can be built with:
make dockers\n
or individually with:
make docker_<microservice_name>\n
For example:
make docker_http\n
N.B. Mainflux creates FROM scratch
docker containers which are compact and small in size.
N.B. The things-db
and users-db
containers are built from a vanilla PostgreSQL docker image downloaded from docker hub which does not persist the data when these containers are rebuilt. Thus, rebuilding of all docker containers with make dockers
or rebuilding the things-db
and users-db
containers separately with make docker_things-db
and make docker_users-db
respectively, will cause data loss. All your users, things, channels and connections between them will be lost! As we use this setup only for development, we don't guarantee any permanent data persistence. Though, in order to enable data retention, we have configured persistent volumes for each container that stores some data. If you want to update your Mainflux dockerized installation and want to keep your data, use make cleandocker
to clean the containers and images and keep the data (stored in docker persistent volumes) and then make run
to update the images and the containers. Check the Cleaning up your dockerized Mainflux setup section for details. Please note that this kind of updating might not work if there are database changes.
In order to speed up build process, you can use commands such as:
make dockers_dev\n
or individually with
make docker_dev_<microservice_name>\n
Commands make dockers
and make dockers_dev
are similar. The main difference is that building images in development mode is done on the local machine rather than in an intermediate image, which makes building images much faster. Before running this command, the corresponding binary needs to be built in order to make the changes visible. This can be done using make
or make <service_name>
command. Commands make dockers_dev
and make docker_dev_<service_name>
should be used only for development to speed up the process of image building. For deployment images, commands from section above should be used.
When the project is first cloned to your system, you will need to build all of the Mainflux services.
make\nmake dockers_dev\n
As you develop and test changes, only the services related to your changes will need to be rebuilt. This will reduce compile time and create a much more enjoyable development experience.
make <microservice_name>\nmake docker_dev_<microservice_name>\nmake run\n
"},{"location":"dev-guide/#overriding-the-default-docker-compose-configuration","title":"Overriding the default docker-compose configuration","text":"Sometimes, depending on the use case and the user's needs it might be useful to override or add some extra parameters to the docker-compose configuration. These configuration changes can be done by specifying multiple compose files with the docker-compose command line option -f as described here. The following format of the docker-compose
command can be used to extend or override the configuration:
docker-compose -f docker/docker-compose.yml -f docker/docker-compose.custom1.yml -f docker/docker-compose.custom2.yml up [-d]\n
In the command above each successive file overrides the previous parameters.
A practical example in our case would be to enable debugging and tracing in NATS so that we can better see how the messages are moving around.
docker-compose.nats-debugging.yml
version: \"3\"\n\nservices:\n nats:\n command: --debug -DV\n
When we have the override files in place, to compose the whole infrastructure including the persistent volumes we can execute:
docker-compose -f docker/docker-compose.yml -f docker/docker-compose.nats-debugging.yml up -d\n
Note: Please store your customizations to some folder outside the Mainflux's source folder and maybe add them to some other git repository. You can always apply your customizations by pointing to the right file using docker-compose -f ...
.
If you want to clean your whole dockerized Mainflux installation you can use the make pv=true cleandocker
command. Please note that by default the make cleandocker
command will stop and delete all of the containers and images, but NOT DELETE persistent volumes. If you want to delete the gathered data in the system (the persistent volumes) please use the following command make pv=true cleandocker
(pv = persistent volumes). This form of the command will stop and delete the containers, the images and will also delete the persistent volumes.
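For example, the two forms described above are:
# remove containers and images but keep persistent volumes (data is kept)
make cleandocker
# remove containers, images and persistent volumes (all data is lost)
make pv=true cleandocker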
The MQTT Microservice in Mainflux is special, as it is currently the only microservice written in NodeJS. It is not compiled, but node modules need to be downloaded in order to start the service:
cd mqtt\nnpm install\n
Note that there is a shorthand for doing these commands with make
tool:
make mqtt\n
After that, the MQTT Adapter can be started from the top directory (as it needs to find *.proto
files) with:
node mqtt/mqtt.js\n
"},{"location":"dev-guide/#troubleshooting","title":"Troubleshooting","text":"Depending on your use case, MQTT topics, message size, the number of clients and the frequency with which the messages are sent it can happen that you experience some problems.
Up until now it has been noticed that in case of high load, big messages and many clients it can happen that the MQTT microservice crashes with the following error:
mainflux-mqtt | FATAL ERROR: CALL_AND_RETRY_LAST Allocation failed - JavaScript heap out of memory\nmainflux-mqtt exited with code 137\n
This problem is caused by the default memory limit in Node (V8), which gives the user about 1.7GB by default. To fix the problem you should add the environment variable NODE_OPTIONS:--max-old-space-size=SPACE_IN_MB in the environment section of the aedes.yml configuration. To find the right value for the --max-old-space-size parameter you'll have to experiment a bit depending on your needs.
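As a sketch, assuming aedes.yml follows the usual docker-compose layout and the MQTT service is named mqtt (both assumptions), the variable could be added like this, with 4096 MB as an example value:
services:
  mqtt:
    environment:
      - NODE_OPTIONS=--max-old-space-size=4096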
The Mainflux MQTT service uses the Aedes MQTT Broker for implementation of the MQTT related things. Therefore, for some questions or problems you can also check out the Aedes's documentation or reach out its contributors.
"},{"location":"dev-guide/#protobuf","title":"Protobuf","text":"If you've made any changes to .proto
files, you should call protoc
command prior to compiling individual microservices.
To do this by hand, execute:
protoc -I. --go_out=. --go_opt=paths=source_relative pkg/messaging/*.proto\nprotoc -I. --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative users/policies/*.proto\nprotoc -I. --go_out=. --go_opt=paths=source_relative --go-grpc_out=. --go-grpc_opt=paths=source_relative things/policies/*.proto\n
A shorthand to do this via make
tool is:
make proto\n
N.B. This must be done once at the beginning in order to generate protobuf Go structures needed for the build. However, if you don't change any of .proto
files, this step is not mandatory, since all generated files are included in the repository (those are files with .pb.go
extension).
Mainflux can be compiled for ARM platform and run on Raspberry Pi or other similar IoT gateways, by following the instructions here or here as well as information found here. The environment variables GOARCH=arm
and GOARM=7
must be set for the compilation.
Cross-compilation for ARM with Mainflux make:
GOOS=linux GOARCH=arm GOARM=7 make\n
"},{"location":"dev-guide/#running-tests","title":"Running tests","text":"To run all of the tests you can execute:
make test\n
Dockertest is used for the tests, so to run them, you will need the Docker daemon/service running.
"},{"location":"dev-guide/#installing","title":"Installing","text":"Installing Go binaries is simple: just move them from build
to $GOBIN
(do not forget to add $GOBIN
to your $PATH
).
You can execute:
make install\n
which will do this copying of the binaries.
N.B. Only Go binaries will be installed this way. The MQTT adapter is a NodeJS script and will stay in the mqtt
dir.
Mainflux depends on several infrastructural services, notably the default message broker (NATS) and a PostgreSQL database.
"},{"location":"dev-guide/#message-broker","title":"Message Broker","text":"Mainflux uses NATS as it's default central message bus. For development purposes (when not run via Docker), it expects that NATS is installed on the local system.
To do this execute:
go install github.com/nats-io/nats-server/v2@latest\n
This will install nats-server
binary that can be simply run by executing:
nats-server\n
If you want to change the default message broker to RabbitMQ, VerneMQ or Kafka you need to install it on the local system. To run using a different broker you need to set the MF_BROKER_TYPE
env variable to nats
, rabbitmq
or vernemq
during make and run process.
MF_BROKER_TYPE=<broker-type> make\nMF_BROKER_TYPE=<broker-type> make run\n
"},{"location":"dev-guide/#postgresql","title":"PostgreSQL","text":"Mainflux uses PostgreSQL to store metadata (users
, things
and channels
entities, alongside authorization tokens). It expects that a PostgreSQL DB is installed, set up and running on the local system.
Information on how to set up (prepare) the PostgreSQL database can be found here; it is done by executing the following commands:
# Create `users` and `things` databases\nsudo -u postgres createdb users\nsudo -u postgres createdb things\n\n# Set-up Postgres roles\nsudo su - postgres\npsql -U postgres\npostgres=# CREATE ROLE mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux';\npostgres=# ALTER USER mainflux WITH LOGIN ENCRYPTED PASSWORD 'mainflux';\n
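To verify that the role and databases were created, you can list the databases as the new role (a quick check, assuming a default local PostgreSQL setup):
psql -U mainflux -h localhost -l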
"},{"location":"dev-guide/#mainflux-services","title":"Mainflux Services","text":"Running of the Mainflux microservices can be tricky, as there is a lot of them and each demand configuration in the form of environment variables.
The whole system (set of microservices) can be run with one command:
make rundev\n
which will properly configure and run all microservices.
Please ensure that the MQTT microservice has node_modules installed, as explained in the MQTT Microservice chapter.
N.B. make rundev
actually calls helper script scripts/run.sh
, so you can inspect this script for the details.
The Mainflux IoT platform provides services that support the management of devices on the edge. Typically, an IoT solution includes devices (sensors/actuators) deployed at the far edge and connected through a proxy gateway. Although most devices could be connected to Mainflux directly, using gateways decentralizes the system, decreases the load on the cloud and makes setup easier. Gateways can also provide additional data processing, filtering and storage.
Services that can be used on the gateway to enable the data and control plane for the edge:
The figure shows an edge gateway running Agent, Export and a minimal deployment of Mainflux services. These services enable device management and the MQTT protocol. NATS, as the default message broker in Mainflux, is the central message bus, and it also becomes the central message bus for other services like Agent
and Export
as well as for any new custom-developed service built to interface with devices over any of the hardware interfaces supported on the gateway; those services publish data to the message broker, where the Export
service can pick them up and send them to the cloud.
Agent can be used to control deployed services as well as to monitor their liveliness by subscribing to the heartbeat
Message Broker subject where services should publish their liveliness status, like Export
service does.
Agent is a service used to manage gateways that are connected to Mainflux in the cloud. It provides a way to send commands to the gateway and receive responses via MQTT. There are two types of channels used by Agent: data
and control
. Over the control
channel we send commands and receive responses. Data collected from sensors connected to the gateway is sent over the data
channel. Agent is able to configure itself provided that a bootstrap server is running; it will retrieve its configuration from the bootstrap server given a few arguments - external_id
and external_key
(see bootstrapping).
The Agent service has the following features:
- a remote terminal session to bash managed by Agent
- by subscribing to the heartbeat.> Message Broker subject it can remotely provide info on running services, if services are publishing heartbeat (like Export)
service.
When you provisioned gateway as described in provision you can check results
curl -s -S -X GET http://mainflux-domain.com:9013/things/bootstrap/<external_id> -H \"Authorization: Thing <external_key>\" -H 'Content-Type: application/json' |jq\n
{\n \"thing_id\": \"e22c383a-d2ab-47c1-89cd-903955da993d\",\n \"thing_key\": \"fc987711-1828-461b-aa4b-16d5b2c642fe\",\n \"channels\": [\n {\n \"id\": \"fa5f9ba8-a1fc-4380-9edb-d0c23eaa24ec\",\n \"name\": \"control-channel\",\n \"metadata\": {\n \"type\": \"control\"\n }\n },\n {\n \"id\": \"24e5473e-3cbe-43d9-8a8b-a725ff918c0e\",\n \"name\": \"data-channel\",\n \"metadata\": {\n \"type\": \"data\"\n }\n },\n {\n \"id\": \"1eac45c2-0f72-4089-b255-ebd2e5732bbb\",\n \"name\": \"export-channel\",\n \"metadata\": {\n \"type\": \"export\"\n }\n }\n ],\n \"content\": \"{\\\"agent\\\":{\\\"edgex\\\":{\\\"url\\\":\\\"http://localhost:48090/api/v1/\\\"},\\\"heartbeat\\\":{\\\"interval\\\":\\\"30s\\\"},\\\"log\\\":{\\\"level\\\":\\\"debug\\\"},\\\"mqtt\\\":{\\\"mtls\\\":false,\\\"qos\\\":0,\\\"retain\\\":false,\\\"skip_tls_ver\\\":true,\\\"url\\\":\\\"tcp://mainflux-domain.com:1883\\\"},\\\"server\\\":{\\\"nats_url\\\":\\\"localhost:4222\\\",\\\"port\\\":\\\"9000\\\"},\\\"terminal\\\":{\\\"session_timeout\\\":\\\"30s\\\"}},\\\"export\\\":{\\\"exp\\\":{\\\"cache_db\\\":\\\"0\\\",\\\"cache_pass\\\":\\\"\\\",\\\"cache_url\\\":\\\"localhost:6379\\\",\\\"log_level\\\":\\\"debug\\\",\\\"nats\\\":\\\"nats://localhost:4222\\\",\\\"port\\\":\\\"8172\\\"},\\\"mqtt\\\":{\\\"ca_path\\\":\\\"ca.crt\\\",\\\"cert_path\\\":\\\"thing.crt\\\",\\\"channel\\\":\\\"\\\",\\\"host\\\":\\\"tcp://mainflux-domain.com:1883\\\",\\\"mtls\\\":false,\\\"password\\\":\\\"\\\",\\\"priv_key_path\\\":\\\"thing.key\\\",\\\"qos\\\":0,\\\"retain\\\":false,\\\"skip_tls_ver\\\":false,\\\"username\\\":\\\"\\\"},\\\"routes\\\":[{\\\"mqtt_topic\\\":\\\"\\\",\\\"nats_topic\\\":\\\"channels\\\",\\\"subtopic\\\":\\\"\\\",\\\"type\\\":\\\"mfx\\\",\\\"workers\\\":10},{\\\"mqtt_topic\\\":\\\"\\\",\\\"nats_topic\\\":\\\"export\\\",\\\"subtopic\\\":\\\"\\\",\\\"type\\\":\\\"default\\\",\\\"workers\\\":10}]}}\"\n}\n
external_id
is usually MAC address, but anything that suits applications requirements can be usedexternal_key
is key that will be provided to agent processthing_id
is mainflux thing idchannels
is 2-element array where first channel is CONTROL and second is DATA, both channels should be assigned to thingcontent
is used for configuring parameters of agent and export service.Then to start the agent service you can do it like this
git clone https://github.com/mainflux/agent\nmake\ncd build\n\nMF_AGENT_LOG_LEVEL=debug \\\nMF_AGENT_BOOTSTRAP_KEY=edged \\\nMF_AGENT_BOOTSTRAP_ID=34:e1:2d:e6:cf:03 ./mainflux-agent\n\n{\"level\":\"info\",\"message\":\"Requesting config for 34:e1:2d:e6:cf:03 from http://localhost:9013/things/bootstrap\",\"ts\":\"2019-12-05T04:47:24.98411512Z\"}\n{\"level\":\"info\",\"message\":\"Getting config for 34:e1:2d:e6:cf:03 from http://localhost:9013/things/bootstrap succeeded\",\"ts\":\"2019-12-05T04:47:24.995465239Z\"}\n{\"level\":\"info\",\"message\":\"Connected to MQTT broker\",\"ts\":\"2019-12-05T04:47:25.009645082Z\"}\n{\"level\":\"info\",\"message\":\"Agent service started, exposed port 9000\",\"ts\":\"2019-12-05T04:47:25.009755345Z\"}\n{\"level\":\"info\",\"message\":\"Subscribed to MQTT broker\",\"ts\":\"2019-12-05T04:47:25.012930443Z\"}\n
MF_AGENT_BOOTSTRAP_KEY
- is external_key
in bootstrap configuration.MF_AGENT_BOOSTRAP_ID
- is external_id
in bootstrap configuration.# Set connection parameters as environment variables in shell\nCH=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H \"Authorization: Thing <BOOTSTRAP_KEY>\" -H 'Content-Type: application/json' | jq -r '.mainflux_channels[0].id'`\nTH=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H \"Authorization: Thing <BOOTSTRAP_KEY>\" -H 'Content-Type: application/json' | jq -r .mainflux_id`\nKEY=`curl -s -S -X GET http://some-domain-name:9013/things/bootstrap/34:e1:2d:e6:cf:03 -H \"Authorization: Thing <BOOTSTRAP_KEY>\" -H 'Content-Type: application/json' | jq -r .mainflux_key`\n\n# Subscribe for response\nmosquitto_sub -d -u $TH -P $KEY -t \"channels/${CH}/messages/res/#\" -h some-domain-name -p 1883\n\n# Publish command e.g `ls`\nmosquitto_pub -d -u $TH -P $KEY -t channels/$CH/messages/req -h some-domain-name -p 1883 -m '[{\"bn\":\"1:\", \"n\":\"exec\", \"vs\":\"ls, -l\"}]'\n
"},{"location":"edge/#remote-terminal","title":"Remote terminal","text":"This can be checked from the UI, click on the details for gateway and below the gateway parameters you will se box with prompt, if agent
is running and it is properly connected you should be able to execute commands remotely.
If there are services that are running on same gateway as agent
and they are publishing heartbeat to the Message Broker subject heartbeat.service_name.service
You can get the list of services by sending following mqtt message
# View services that are sending heartbeat\nmosquitto_pub -d -u $TH -P $KEY -t channels/$CH/messages/req -h some-domain-name -p 1883 -m '[{\"bn\":\"1:\", \"n\":\"service\", \"vs\":\"view\"}]'\n
Response can be observed on channels/$CH/messages/res/#
You can send commands to services running on the same edge gateway as Agent if they are subscribed on same the Message Broker server and correct subject.
Service commands are being sent via MQTT to topic:
channels/<control_channel_id>/messages/services/<service_name>/<subtopic>
when messages is received Agent forwards them to the Message Broker on subject:
commands.<service_name>.<subtopic>
Payload is up to the application and service itself.
"},{"location":"edge/#edgex","title":"EdgeX","text":"Edgex control messages are sent and received over control channel. MF sends a control SenML of the following form:
[{\"bn\":\"<uuid>:\", \"n\":\"control\", \"vs\":\"<cmd>, <param>, edgexsvc1, edgexsvc2, \u2026, edgexsvcN\"}}]\n
For example,
[{\"bn\":\"1:\", \"n\":\"control\", \"vs\":\"operation, stop, edgex-support-notifications, edgex-core-data\"}]\n
Agent, on the other hand, returns a response SenML of the following form:
[{\"bn\":\"<uuid>:\", \"n\":\"<>\", \"v\":\"<RESP>\"}]\n
"},{"location":"edge/#remote-commands","title":"Remote Commands","text":"EdgeX defines SMA commands in the following RAML file
Commands are:
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{\"bn\":\"1:\", \"n\":\"control\", \"vs\":\"edgex-operation, start, edgex-support-notifications, edgex-core-data\"}]'\n
"},{"location":"edge/#config","title":"Config","text":"mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{\"bn\":\"1:\", \"n\":\"control\", \"vs\":\"edgex-config, edgex-support-notifications, edgex-core-data\"}]'\n
"},{"location":"edge/#metrics","title":"Metrics","text":"mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/req -h localhost -m '[{\"bn\":\"1:\", \"n\":\"control\", \"vs\":\"edgex-metrics, edgex-support-notifications, edgex-core-data\"}]'\n
If you subscribe to
mosquitto_sub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/#\n
You can observe commands and response from commands executed against edgex
[{\"bn\":\"1:\", \"n\":\"control\", \"vs\":\"edgex-metrics, edgex-support-notifications, edgex-core-data\"}]\n[{\"bn\":\"1\",\"n\":\"edgex-metrics\",\"vs\":\"{\\\"Metrics\\\":{\\\"edgex-core-data\\\":{\\\"CpuBusyAvg\\\":15.568632467698606,\\\"Memory\\\":{\\\"Alloc\\\":2040136,\\\"Frees\\\":876344,\\\"LiveObjects\\\":15134,\\\"Mallocs\\\":891478,\\\"Sys\\\":73332984,\\\"TotalAlloc\\\":80657464}},\\\"edgex-support-notifications\\\":{\\\"CpuBusyAvg\\\":14.65381169745318,\\\"Memory\\\":{\\\"Alloc\\\":961784,\\\"Frees\\\":127430,\\\"LiveObjects\\\":6095,\\\"Mallocs\\\":133525,\\\"Sys\\\":72808696,\\\"TotalAlloc\\\":11665416}}}}\\n\"}]\n
"},{"location":"edge/#export","title":"Export","text":"Mainflux Export service can send message from one Mainflux cloud to another via MQTT, or it can send messages from edge gateway to Mainflux Cloud. Export service is subscribed to local message bus and connected to MQTT broker in the cloud. Messages collected on local message bus are redirected to the cloud. When connection is lost, if QoS2 is used, messages from the local bus are stored into file or in memory to be resent upon reconnection. Additonaly Export
service publishes liveliness status to Agent
via the Message Broker subject heartbeat.export.service
Get the code:
go get github.com/mainflux/export\ncd $GOPATH/github.com/mainflux/export\n
Make:
make\n
"},{"location":"edge/#usage","title":"Usage","text":"cd build\n./mainflux-export\n
"},{"location":"edge/#configuration","title":"Configuration","text":"By default Export
service looks for config file at ../configs/config.toml
if no env vars are specified.
[exp]\n log_level = \"debug\"\n nats = \"localhost:4222\"\n port = \"8170\"\n\n[mqtt]\n username = \"<thing_id>\"\n password = \"<thing_password>\"\n ca_path = \"ca.crt\"\n client_cert = \"\"\n client_cert_key = \"\"\n client_cert_path = \"thing.crt\"\n client_priv_key_path = \"thing.key\"\n mtls = \"false\"\n priv_key = \"thing.key\"\n retain = \"false\"\n skip_tls_ver = \"false\"\n url = \"tcp://mainflux.com:1883\"\n\n[[routes]]\n mqtt_topic = \"channel/<channel_id>/messages\"\n subtopic = \"subtopic\"\n nats_topic = \"export\"\n type = \"default\"\n workers = 10\n\n[[routes]]\n mqtt_topic = \"channel/<channel_id>/messages\"\n subtopic = \"subtopic\"\n nats_topic = \"channels\"\n type = \"mfx\"\n workers = 10\n
"},{"location":"edge/#environment-variables","title":"Environment variables","text":"Service will first look for MF_EXPORT_CONFIG_FILE
for configuration and if not found it will be configured with env variables and new config file specified with MF_EXPORT_CONFIG_FILE
(default value will be used if none specified) will be saved with values populated from env vars. The service is configured using the environment variables as presented in the table. Note that any unset variables will be replaced with their default values.
For values in environment variables to take effect make sure that there is no MF_EXPORT_CONFIG_FILE
file.
If you run with environment variables you can create config file:
MF_EXPORT_PORT=8178 \\\nMF_EXPORT_LOG_LEVEL=debug \\\nMF_EXPORT_MQTT_HOST=tcp://localhost:1883 \\\nMF_EXPORT_MQTT_USERNAME=<thing_id> \\\nMF_EXPORT_MQTT_PASSWORD=<thing_secret> \\\nMF_EXPORT_MQTT_CHANNEL=<channel_id> \\\nMF_EXPORT_MQTT_SKIP_TLS=true \\\nMF_EXPORT_MQTT_MTLS=false \\\nMF_EXPORT_MQTT_CA=ca.crt \\\nMF_EXPORT_MQTT_CLIENT_CERT=thing.crt \\\nMF_EXPORT_MQTT_CLIENT_PK=thing.key \\\nMF_EXPORT_CONFIG_FILE=export.toml \\\n../build/mainflux-export&\n
Values from environment variables will be used to populate export.toml
"},{"location":"edge/#http-port","title":"Http port","text":"port
- HTTP port where status of Export
service can be fetched.curl -X GET http://localhost:8170/health\n'{\"status\": \"pass\", \"version\":\"0.12.1\", \"commit\":\"57cca9677721025da055c47957fc3e869e0325aa\" , \"description\":\"export service\", \"build_time\": \"2022-01-19_10:13:17\"}'\n
"},{"location":"edge/#mqtt-connection","title":"MQTT connection","text":"To establish connection to MQTT broker following settings are needed:
username
- Mainflux password
- Mainflux url
- url of MQTT brokerAdditionally, you will need MQTT client certificates if you enable mTLS. To obtain certificates ca.crt
, thing.crt
and key thing.key
follow instructions here or here.
To setup MTLS
connection Export
service requires client certificate and mtls
in config or MF_EXPORT_MQTT_MTLS
must be set to true
. Client certificate can be provided in a file, client_cert_path
and client_cert_key_path
are used for specifying path to certificate files. If MTLS is used and no certificate file paths are specified then Export
will look in client_cert
and client_cert_key
of config file expecting certificate content stored as string.
Routes are being used for specifying which subscriber's topic(subject) goes to which publishing topic. Currently only MQTT is supported for publishing. To match Mainflux requirements mqtt_topic
must contain channel/<channel_id>/messages
, additional subtopics can be appended.
mqtt_topic
- channel/<channel_id>/messages/<custom_subtopic>
nats_topic
- Export
service will be subscribed to the Message Broker subject <nats_topic>.>
subtopic
- messages will be published to MQTT topic <mqtt_topic>/<subtopic>/<nats_subject>
, where dots in nats_subject are replaced with '/'workers
- specifies number of workers that will be used for message forwarding.type
- specifies message transformation:default
is for sending messages as they are received on the Message Broker with no transformation (so they should be in SenML or JSON format if we want to persist them in Mainflux in cloud). If you don't want to persist messages in Mainflux or you are not exporting to Mainflux cloud - message format can be anything that suits your application as message passes untransformed.mfx
is for messages that are being picked up on internal Mainflux Message Broker bus. When using Export
along with Mainflux deployed on gateway (Fig. 1) messages coming from MQTT broker that are published to the Message Broker bus are Mainflux message. Using mfx
type will extract payload and export
will publish it to mqtt_topic
. Extracted payload is SenML or JSON if we want to persist messages. nats_topic
in this case must be channels
, or if you want to pick messages from a specific channel in local Mainflux instance to be exported to cloud you can put channels.<local_mainflux_channel_id>
.Before running Export
service edit configs/config.toml
and provide username
, password
and url
username
- matches thing_id
in Mainflux cloud instancepassword
- matches thing_secret
channel
- MQTT part of the topic where to publish MQTT data (channel/<channel_id>/messages
is format of mainflux MQTT topic) and plays a part in authorization.If Mainflux and Export service are deployed on same gateway Export
can be configured to send messages from Mainflux internal Message Broker bus to Mainflux in a cloud. In order for Export
service to listen on Mainflux Message Broker deployed on the same machine Message Broker port must be exposed. Edit Mainflux docker-compose.yml. Default Message Broker, NATS, section must look like below:
nats:\n image: nats:2.2.4\n container_name: mainflux-nats\n restart: on-failure\n networks:\n - mainflux-base-net\n ports:\n - 4222:4222\n
"},{"location":"edge/#how-to-save-config-via-agent","title":"How to save config via agent","text":"Configuration file for Export
service can be sent over MQTT using Agent service.
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<control_ch_id>/messages/req -h localhost -p 18831 -m \"[{\\\"bn\\\":\\\"1:\\\", \\\"n\\\":\\\"config\\\", \\\"vs\\\":\\\"save, export, <config_file_path>, <file_content_base64>\\\"}]\"\n
vs=\"save, export, config_file_path, file_content_base64\"
- vs determines where to save file and contains file content in base64 encoding payload:
b,_ := toml.Marshal(export.Config)\npayload := base64.StdEncoding.EncodeToString(b)\n
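Equivalently, the file content can be encoded from the shell before embedding it in the SenML message (a sketch; -w0 disables line wrapping in GNU base64):
base64 -w0 <config_file_path>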
"},{"location":"edge/#using-configure-script","title":"Using configure script","text":"There is a configuration.sh
script in a scripts
directory that can be used for automatic configuration and start up of remotely deployed export
. For this to work it is presumed that mainflux-export
and scripts/export_start
are placed in executable path on remote device. Additionally this script requires that remote device is provisioned following the steps described for provision service.
To run it first edit script to set parameters
MTLS=false\nEXTERNAL_KEY='raspberry'\nEXTERNAL_ID='pi'\nMAINFLUX_HOST='mainflux.com'\nMAINFLUX_USER_EMAIL='edge@email.com'\nMAINFLUX_USER_PASSWORD='12345678'\n
EXTERNAL_KEY
and EXTERNAL_ID
are parameters posted to /mapping
endpoint of provision
service, MAINFLUX_HOST
is location of cloud instance of Mainflux that export
should connect to and MAINFLUX_USER_EMAIL
and MAINFLUX_USER_PASSWORD
are users credentials in the cloud.
The following are steps that are an example usage of Mainflux components to connect edge with cloud. We will start Mainflux in the cloud with additional services Bootstrap and Provision. Using Bootstrap and Provision we will create a configuration for use in gateway deployment. On the gateway we will start services Agent and Export using previously created configuration.
"},{"location":"edge/#services-in-the-cloud","title":"Services in the cloud","text":"Start the Mainflux:
docker-compose -f docker/docker-compose.yml up\n
Start the Bootstrap service:
docker-compose -f docker/addons/bootstrap/docker-compose.yml up\n
Start the Provision service
docker-compose -f docker/addons/provision/docker-compose.yml up\n
Create user:
mainflux-cli -m http://localhost:9002 users create test test@email.com 12345678\n
Obtain user token:
mainflux-cli -m http://localhost:9002 users token test@email.com 12345678\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY3NTEzNTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoiYWNjZXNzIn0.AND1sm6mN2wgUxVkDhpipCoNa87KPMghGaS5-4dU0iZaqGIUhWScrEJwOahT9ts1TZSd1qEcANTIffJ_y2Pbsg\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODY4MzY4NTIsImlhdCI6MTY4Njc1MDQ1MiwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI5NDkzOTE1OS1kMTI5LTRmMTctOWU0ZS1jYzJkNjE1NTM5ZDciLCJ0eXBlIjoicmVmcmVzaCJ9.z3OWCHhNHNuvkzBqEAoLKWS6vpFLkIYXhH9cZogSCXd109-BbKVlLvYKmja-hkhaj_XDJKySDN3voiazBr_WTA\",\n \"access_type\": \"Bearer\"\n}\n\nUSER_TOKEN=<access_token>\n
Provision a gateway:
curl -s -S -X POST http://localhost:9016/mapping -H \"Authorization: Bearer $USER_TOKEN\" -H 'Content-Type: application/json' -d '{\"name\":\"testing\", \"external_id\" : \"54:FG:66:DC:43\", \"external_key\":\"223334fw2\" }' | jq\n
{\n \"things\": [\n {\n \"id\": \"88529fb2-6c1e-4b60-b9ab-73b5d89f7404\",\n \"name\": \"thing\",\n \"key\": \"3529c1bb-7211-4d40-9cd8-b05833196093\",\n \"metadata\": {\n \"external_id\": \"54:FG:66:DC:43\"\n }\n }\n ],\n \"channels\": [\n {\n \"id\": \"1aa3f736-0bd3-44b5-a917-a72cc743f633\",\n \"name\": \"control-channel\",\n \"metadata\": {\n \"type\": \"control\"\n }\n },\n {\n \"id\": \"e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce\",\n \"name\": \"data-channel\",\n \"metadata\": {\n \"type\": \"data\"\n }\n }\n ],\n \"whitelisted\": {\n \"88529fb2-6c1e-4b60-b9ab-73b5d89f7404\": true\n }\n}\n
Parameters external_id and external_key represent the gateway. Provision will use them to create a bootstrap configuration that relates the gateway to the Mainflux entities used for connection, authentication and authorization: thing and channel. These parameters will be used by the Agent
service on the gateway to retrieve that information and establish a connection with the cloud."},{"location":"edge/#services-on-the-edge","title":"Services on the Edge","text":""},{"location":"edge/#agent-service","title":"Agent service","text":"
Start the NATS and Agent service:
gnatsd\nMF_AGENT_BOOTSTRAP_ID=54:FG:66:DC:43 \\\nMF_AGENT_BOOTSTRAP_KEY=\"223334fw2\" \\\nMF_AGENT_BOOTSTRAP_URL=http://localhost:9013/things/bootstrap \\\nbuild/mainflux-agent\n{\"level\":\"info\",\"message\":\"Requesting config for 54:FG:66:DC:43 from http://localhost:9013/things/bootstrap\",\"ts\":\"2020-05-07T15:50:58.041145096Z\"}\n{\"level\":\"info\",\"message\":\"Getting config for 54:FG:66:DC:43 from http://localhost:9013/things/bootstrap succeeded\",\"ts\":\"2020-05-07T15:50:58.120779415Z\"}\n{\"level\":\"info\",\"message\":\"Saving export config file /configs/export/config.toml\",\"ts\":\"2020-05-07T15:50:58.121602229Z\"}\n{\"level\":\"warn\",\"message\":\"Failed to save export config file Error writing config file: open /configs/export/config.toml: no such file or directory\",\"ts\":\"2020-05-07T15:50:58.121752142Z\"}\n{\"level\":\"info\",\"message\":\"Client agent-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 connected\",\"ts\":\"2020-05-07T15:50:58.128500603Z\"}\n{\"level\":\"info\",\"message\":\"Agent service started, exposed port 9003\",\"ts\":\"2020-05-07T15:50:58.128531057Z\"}\n
"},{"location":"edge/#export-service","title":"Export service","text":"git clone https://github.com/mainflux/export\nmake\n
Edit the configs/config.toml
settings:
username
- the thing ID from the provision request results.
password
- the thing key from the provision request results.
mqtt_topic
- in routes, set to channels/<channel_data_id>/messages using the data channel ID from the provision results.
nats_topic
- whatever you need; export will subscribe to export.<nats_topic> and forward messages to MQTT.
host
- url of MQTT broker.[exp]\n cache_pass = \"\"\n cache_url = \"\"\n log_level = \"debug\"\n nats = \"localhost:4222\"\n port = \"8170\"\n\n[mqtt]\n ca_path = \"\"\n cert_path = \"\"\n host = \"tcp://localhost:1883\"\n mtls = false\n password = \"3529c1bb-7211-4d40-9cd8-b05833196093\"\n priv_key_path = \"\"\n qos = 0\n retain = false\n skip_tls_ver = false\n username = \"88529fb2-6c1e-4b60-b9ab-73b5d89f7404\"\n\n[[routes]]\n mqtt_topic = \"channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages\"\n nats_topic = \">\"\n workers = 10\n
cd build\n./mainflux-export\n2020/05/07 17:36:57 Configuration loaded from file ../configs/config.toml\n{\"level\":\"info\",\"message\":\"Export service started, exposed port :8170\",\"ts\":\"2020-05-07T15:36:57.528398548Z\"}\n{\"level\":\"debug\",\"message\":\"Client export-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 connected\",\"ts\":\"2020-05-07T15:36:57.528405818Z\"}\n
"},{"location":"edge/#testing-export","title":"Testing Export","text":"git clone https://github.com/mainflux/agent\ngo run ./examples/publish/main.go -s http://localhost:4222 export.test \"[{\\\"bn\\\":\\\"test\\\"}]\";\n
We have configured a route for export: nats_topic = \">\"
means that it will listen to NATS
subject export.>
and mqtt_topic
is configured so that data will be sent to MQTT broker on topic channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages
with appended NATS
subject. Other brokers, such as rabbitmq, can also be used; for more details refer to the dev-guide.
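As an alternative to the agent example above, the same test message can be published directly with the NATS Go client. The sketch below is an illustration only; it assumes the github.com/nats-io/nats.go library and the local gnatsd instance started earlier:

package main

import "github.com/nats-io/nats.go"

func main() {
	nc, err := nats.Connect("nats://localhost:4222")
	if err != nil {
		panic(err)
	}
	defer nc.Close()

	// Publish a SenML record to the export.test subject; the Export service
	// forwards it to the MQTT topic configured in the route, with the NATS
	// subject appended.
	if err := nc.Publish("export.test", []byte(`[{"bn":"test"}]`)); err != nil {
		panic(err)
	}
	if err := nc.Flush(); err != nil {
		panic(err)
	}
}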
In the terminal where export is started you should see the following message:
{\"level\":\"debug\",\"message\":\"Published to: export.test, payload: [{\\\"bn\\\":\\\"test\\\"}]\",\"ts\":\"2020-05-08T15:14:15.757298992Z\"}\n
In Mainflux mqtt
service:
mainflux-mqtt | {\"level\":\"info\",\"message\":\"Publish - client ID export-88529fb2-6c1e-4b60-b9ab-73b5d89f7404 to the topic: channels/e2adcfa6-96b2-425d-8cd4-ff8cb9c056ce/messages/export/test\",\"ts\":\"2020-05-08T15:16:02.999684791Z\"}\n
"},{"location":"entities/","title":"Entities","text":"Client is a component that will replace and unify the Mainflux Things and Users services. The purpose is to represent generic client accounts. Each client is identified using its identity and secret. The client will differ from Things service to Users service but we aim to achieve 1:1 implementation between the clients whilst changing how client secret works. This includes client secret generation, usage, modification and storage
"},{"location":"entities/#generic-client-entity","title":"Generic Client Entity","text":"The client entity is represented by the Client struct in Go. The fields of this struct are as follows:
// Credentials represent client credentials: its\n// \"identity\" which can be a username, email, generated name;\n// and \"secret\" which can be a password or access token.\ntype Credentials struct {\n Identity string `json:\"identity,omitempty\"` // username or generated login ID\n Secret string `json:\"secret\"` // password or token\n}\n\n// Client represents generic Client.\ntype Client struct {\n ID string `json:\"id\"`\n Name string `json:\"name,omitempty\"`\n Tags []string `json:\"tags,omitempty\"`\n Owner string `json:\"owner,omitempty\"` // nullable\n Credentials Credentials `json:\"credentials\"`\n Metadata Metadata `json:\"metadata,omitempty\"`\n CreatedAt time.Time `json:\"created_at\"`\n UpdatedAt time.Time `json:\"updated_at,omitempty\"`\n UpdatedBy string `json:\"updated_by,omitempty\"`\n Status Status `json:\"status\"` // 1 for enabled, 0 for disabled\n Role Role `json:\"role,omitempty\"` // 1 for admin, 0 for normal user\n}\n
ID
is a unique identifier for each client. It is a string value.Name
is an optional field that represents the name of the client.Tags
is an optional field that represents the tags related to the client. It is a slice of string values.Owner
is an optional field that represents the owner of the client.Credentials
is a struct that represents the client credentials. It contains two fields:Identity
This is the identity of the client, which can be a username, email, or generated name.Secret
This is the secret of the client, which can be a password, secret key, or access token.Metadata
is an optional field that is used for customized describing of the client.CreatedAt
is a field that represents the time when the client was created. It is a time.Time value.UpdatedAt
is a field that represents the time when the client was last updated. It is a time.Time value.UpdatedBy
is a field that represents the user who last updated the client.Status
is a field that represents the status for the client. It can be either 1 for enabled or 0 for disabled.Role
is an optional field that represents the role of the client. It can be either 1 for admin or 0 for a regular user. Currently, we have the things service and the users service as two deployments of the client entity. The things service is used to create, read, update, and delete things. The users service is used to create, read, update, and delete users. The client entity will be used to replace the things and users services. The client entity can be serialized to and from JSON format for communication with other services.
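A short usage sketch of that serialization is shown below. It assumes the Client and Credentials definitions from the snippet above (together with their Status, Role and Metadata types) are in the same package; all field values are made-up placeholders.

package main

import (
	"encoding/json"
	"fmt"
	"time"
)

func main() {
	c := Client{
		ID:   "<client_id>",
		Name: "edge-operator",
		Credentials: Credentials{
			Identity: "operator@example.com",
			Secret:   "12345678",
		},
		CreatedAt: time.Now(),
		Status:    1, // enabled
	}

	// Marshal the client to the JSON shape exchanged between services.
	b, err := json.Marshal(c)
	if err != nil {
		panic(err)
	}
	fmt.Println(string(b))
	// Fields tagged omitempty (tags, owner, metadata, ...) are dropped when unset.
}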
"},{"location":"entities/#users-service","title":"Users service","text":"For grouping Mainflux entities there are groups
object in the users
service. These groups can be used for grouping users only. Groups are organized like a tree: a group can have one parent and multiple children, and a group with no parent is the root of the tree.
In order to be an easily integrable system, Mainflux uses Redis Streams as an event log for event sourcing. The services that publish events to Redis Streams are the users
service, things
service, bootstrap
service and mqtt
adapter.
For every operation the users service will generate a new event and publish it to the Redis Stream called mainflux.users. Every event has its own automatically generated event ID and an operation field that can have one of the following values:
user.create
for user creationuser.update
for user updateuser.remove
for user change of stateuser.view
for user viewuser.view_profile
for user profile viewuser.list
for listing usersuser.list_by_group
for listing users by groupuser.identify
for user identificationuser.generate_reset_token
for generating reset tokenuser.issue_token
for issuing tokenuser.refresh_token
for refreshing tokenuser.reset_secret
for resetting secretuser.send_password_reset
for sending password resetgroup.create
for group creationgroup.update
for group updategroup.remove
for group change of stategroup.view
for group viewgroup.list
for listing groupsgroup.list_by_user
for listing groups by userpolicy.authorize
for policy authorizationpolicy.add
for policy creationpolicy.update
for policy updatepolicy.remove
for policy deletionpolicy.list
for listing policiesBy fetching and processing these events you can reconstruct users
service state. If you store some of your custom data in metadata
field, this is the perfect way to fetch it and process it. If you want to integrate through docker-compose.yml you can use mainflux-es-redis
service. Just connect to it and consume events from Redis Stream named mainflux.users
.
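For example, a minimal consumer sketch in Go (an illustration, not an official Mainflux integration) might tail the stream like this; it assumes the go-redis client and that the Redis instance is reachable on localhost:6379:

package main

import (
	"context"
	"fmt"

	"github.com/go-redis/redis/v8"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	lastID := "$" // start from new events only; use "0" to replay the whole stream
	for {
		streams, err := rdb.XRead(ctx, &redis.XReadArgs{
			Streams: []string{"mainflux.users", lastID},
			Block:   0, // block until a new event arrives
		}).Result()
		if err != nil {
			panic(err)
		}
		for _, s := range streams {
			for _, msg := range s.Messages {
				lastID = msg.ID
				// Every entry is a flat map of field names to values.
				fmt.Printf("%s: %v\n", msg.Values["operation"], msg.Values)
			}
		}
	}
}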
Whenever user is created, users
service will generate new create
event. This event will have the following format:
1) \"1693307171926-0\"\n2) 1) \"occurred_at\"\n 2) \"1693307171925834295\"\n 3) \"operation\"\n 4) \"user.create\"\n 5) \"id\"\n 6) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"created_at\"\n 10) \"2023-08-29T11:06:11.914074Z\"\n 11) \"name\"\n 12) \"-dry-sun\"\n 13) \"metadata\"\n 14) \"{}\"\n 15) \"identity\"\n 16) \"-small-flower@email.com\"\n
As you can see from this example, every odd field represents a field name while every even field represents a field value. This is the standard event format for Redis Streams. If you want to extract the metadata field from this event, you'll have to read it as a string first and then deserialize it into some structured format.
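As an illustration, a small helper along these lines (a sketch, not Mainflux code) could pull custom data out of the metadata value of a consumed entry; the example metadata content is made up:

package main

import (
	"encoding/json"
	"fmt"
)

// decodeMetadata reads the metadata entry, which arrives as a plain string,
// and unmarshals it into a structured value.
func decodeMetadata(values map[string]interface{}) (map[string]interface{}, error) {
	raw, ok := values["metadata"].(string)
	if !ok || raw == "" {
		return nil, nil // the event carries no metadata
	}
	var md map[string]interface{}
	if err := json.Unmarshal([]byte(raw), &md); err != nil {
		return nil, err
	}
	return md, nil
}

func main() {
	// Values as they might appear in a user.create entry with custom metadata.
	values := map[string]interface{}{
		"operation": "user.create",
		"metadata":  `{"serial":"12345"}`,
	}
	md, err := decodeMetadata(values)
	if err != nil {
		panic(err)
	}
	fmt.Println(md["serial"]) // 12345
}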
Whenever user is viewed, users
service will generate new view
event. This event will have the following format:
1) \"1693307172248-0\"\n2) 1) \"name\"\n 2) \"-holy-pond\"\n 3) \"owner\"\n 4) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 5) \"created_at\"\n 6) \"2023-08-29T11:06:12.032254Z\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"operation\"\n 10) \"user.view\"\n 11) \"id\"\n 12) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 13) \"identity\"\n 14) \"-snowy-wave@email.com\"\n 15) \"metadata\"\n 16) \"{}\"\n 17) \"occurred_at\"\n 18) \"1693307172247989798\"\n
"},{"location":"events/#user-view-profile-event","title":"User view profile event","text":"Whenever user profile is viewed, users
service will generate new view_profile
event. This event will have the following format:
1) \"1693308867001-0\"\n2) 1) \"id\"\n 2) \"64fd20bf-e8fb-46bf-9b64-2a6572eda21b\"\n 3) \"name\"\n 4) \"admin\"\n 5) \"identity\"\n 6) \"admin@example.com\"\n 7) \"metadata\"\n 8) \"{\\\"role\\\":\\\"admin\\\"}\"\n 9) \"created_at\"\n 10) \"2023-08-29T10:55:23.048948Z\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"occurred_at\"\n 14) \"1693308867001792403\"\n 15) \"operation\"\n 16) \"user.view_profile\"\n
"},{"location":"events/#user-list-event","title":"User list event","text":"Whenever user list is fetched, users
service will generate new list
event. This event will have the following format:
1) \"1693307172254-0\"\n2) 1) \"status\"\n 2) \"enabled\"\n 3) \"occurred_at\"\n 4) \"1693307172254687479\"\n 5) \"operation\"\n 6) \"user.list\"\n 7) \"total\"\n 8) \"0\"\n 9) \"offset\"\n 10) \"0\"\n 11) \"limit\"\n 12) \"10\"\n
"},{"location":"events/#user-list-by-group-event","title":"User list by group event","text":"Whenever user list by group is fetched, users
service will generate new list_by_group
event. This event will have the following format:
1) \"1693308952544-0\"\n2) 1) \"operation\"\n 2) \"user.list_by_group\"\n 3) \"total\"\n 4) \"0\"\n 5) \"offset\"\n 6) \"0\"\n 7) \"limit\"\n 8) \"10\"\n 9) \"group_id\"\n 10) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"occurred_at\"\n 14) \"1693308952544612677\"\n
"},{"location":"events/#user-identify-event","title":"User identify event","text":"Whenever user is identified, users
service will generate new identify
event. This event will have the following format:
1) \"1693307172168-0\"\n2) 1) \"operation\"\n 2) \"user.identify\"\n 3) \"user_id\"\n 4) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 5) \"occurred_at\"\n 6) \"1693307172167980303\"\n
"},{"location":"events/#user-generate-reset-token-event","title":"User generate reset token event","text":"Whenever user reset token is generated, users
service will generate new generate_reset_token
event. This event will have the following format:
1) \"1693310458376-0\"\n2) 1) \"operation\"\n 2) \"user.generate_reset_token\"\n 3) \"email\"\n 4) \"rodneydav@gmail.com\"\n 5) \"host\"\n 6) \"http://localhost\"\n 7) \"occurred_at\"\n 8) \"1693310458376066097\"\n
"},{"location":"events/#user-issue-token-event","title":"User issue token event","text":"Whenever user token is issued, users
service will generate new issue_token
event. This event will have the following format:
1) \"1693307171987-0\"\n2) 1) \"operation\"\n 2) \"user.issue_token\"\n 3) \"identity\"\n 4) \"-small-flower@email.com\"\n 5) \"occurred_at\"\n 6) \"1693307171987023095\"\n
"},{"location":"events/#user-refresh-token-event","title":"User refresh token event","text":"Whenever user token is refreshed, users
service will generate new refresh_token
event. This event will have the following format:
1) \"1693309886622-0\"\n2) 1) \"operation\"\n 2) \"user.refresh_token\"\n 3) \"occurred_at\"\n 4) \"1693309886622414715\"\n
"},{"location":"events/#user-reset-secret-event","title":"User reset secret event","text":"Whenever user secret is reset, users
service will generate new reset_secret
event. This event will have the following format:
1) \"1693311075789-0\"\n2) 1) \"operation\"\n 2) \"user.update_secret\"\n 3) \"updated_by\"\n 4) \"34591d29-13eb-49f8-995b-e474911eeb8a\"\n 5) \"name\"\n 6) \"rodney\"\n 7) \"created_at\"\n 8) \"2023-08-29T11:59:51.456429Z\"\n 9) \"status\"\n 10) \"enabled\"\n 11) \"occurred_at\"\n 12) \"1693311075789446621\"\n 13) \"updated_at\"\n 14) \"2023-08-29T12:11:15.785039Z\"\n 15) \"id\"\n 16) \"34591d29-13eb-49f8-995b-e474911eeb8a\"\n 17) \"identity\"\n 18) \"rodneydav@gmail.com\"\n 19) \"metadata\"\n 20) \"{}\"\n
"},{"location":"events/#user-update-event","title":"User update event","text":"Whenever user instance is updated, users
service will generate new update
event. This event will have the following format:
1) \"1693307172308-0\"\n2) 1) \"operation\"\n 2) \"user.update\"\n 3) \"updated_by\"\n 4) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 5) \"id\"\n 6) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 7) \"metadata\"\n 8) \"{\\\"Update\\\":\\\"rough-leaf\\\"}\"\n 9) \"updated_at\"\n 10) \"2023-08-29T11:06:12.294444Z\"\n 11) \"name\"\n 12) \"fragrant-voice\"\n 13) \"identity\"\n 14) \"-snowy-wave@email.com\"\n 15) \"created_at\"\n 16) \"2023-08-29T11:06:12.032254Z\"\n 17) \"status\"\n 18) \"enabled\"\n 19) \"occurred_at\"\n 20) \"1693307172308305030\"\n
"},{"location":"events/#user-update-identity-event","title":"User update identity event","text":"Whenever user identity is updated, users
service will generate new update_identity
event. This event will have the following format:
1) \"1693307172321-0\"\n2) 1) \"metadata\"\n 2) \"{\\\"Update\\\":\\\"rough-leaf\\\"}\"\n 3) \"created_at\"\n 4) \"2023-08-29T11:06:12.032254Z\"\n 5) \"status\"\n 6) \"enabled\"\n 7) \"updated_at\"\n 8) \"2023-08-29T11:06:12.310276Z\"\n 9) \"updated_by\"\n 10) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 11) \"id\"\n 12) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 13) \"name\"\n 14) \"fragrant-voice\"\n 15) \"operation\"\n 16) \"user.update_identity\"\n 17) \"identity\"\n 18) \"wandering-brook\"\n 19) \"occurred_at\"\n 20) \"1693307172320906479\"\n
"},{"location":"events/#user-update-tags-event","title":"User update tags event","text":"Whenever user tags are updated, users
service will generate new update_tags
event. This event will have the following format:
1) \"1693307172332-0\"\n2) 1) \"name\"\n 2) \"fragrant-voice\"\n 3) \"identity\"\n 4) \"wandering-brook\"\n 5) \"metadata\"\n 6) \"{\\\"Update\\\":\\\"rough-leaf\\\"}\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"updated_at\"\n 10) \"2023-08-29T11:06:12.323039Z\"\n 11) \"updated_by\"\n 12) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 13) \"id\"\n 14) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 15) \"occurred_at\"\n 16) \"1693307172332766275\"\n 17) \"operation\"\n 18) \"user.update_tags\"\n 19) \"tags\"\n 20) \"[patient-thunder]\"\n 21) \"created_at\"\n 22) \"2023-08-29T11:06:12.032254Z\"\n
"},{"location":"events/#user-remove-event","title":"User remove event","text":"Whenever user instance changes state in the system, users
service will generate and publish new remove
event. This event will have the following format:
1) \"1693307172345-0\"\n2) 1) \"operation\"\n 2) \"user.remove\"\n 3) \"id\"\n 4) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 5) \"status\"\n 6) \"disabled\"\n 7) \"updated_at\"\n 8) \"2023-08-29T11:06:12.323039Z\"\n 9) \"updated_by\"\n 10) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 11) \"occurred_at\"\n 12) \"1693307172345419824\"\n\n1) \"1693307172359-0\"\n2) 1) \"id\"\n 2) \"56d2a797-dcb9-4fab-baf9-7c75e707b2c0\"\n 3) \"status\"\n 4) \"enabled\"\n 5) \"updated_at\"\n 6) \"2023-08-29T11:06:12.323039Z\"\n 7) \"updated_by\"\n 8) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 9) \"occurred_at\"\n 10) \"1693307172359445655\"\n 11) \"operation\"\n 12) \"user.remove\"\n
"},{"location":"events/#group-create-event","title":"Group create event","text":"Whenever group is created, users
service will generate new create
event. This event will have the following format:
1) \"1693307172153-0\"\n2) 1) \"name\"\n 2) \"-fragrant-resonance\"\n 3) \"metadata\"\n 4) \"{}\"\n 5) \"occurred_at\"\n 6) \"1693307172152850138\"\n 7) \"operation\"\n 8) \"group.create\"\n 9) \"id\"\n 10) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"created_at\"\n 14) \"2023-08-29T11:06:12.129484Z\"\n 15) \"owner\"\n 16) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n
As you can see from this example, every odd field represents a field name while every even field represents a field value. This is the standard event format for Redis Streams. If you want to extract the metadata field from this event, you'll have to read it as a string first and then deserialize it into some structured format.
Whenever group instance is updated, users
service will generate new update
event. This event will have the following format:
1) \"1693307172445-0\"\n2) 1) \"operation\"\n 2) \"group.update\"\n 3) \"owner\"\n 4) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 5) \"name\"\n 6) \"young-paper\"\n 7) \"occurred_at\"\n 8) \"1693307172445370750\"\n 9) \"updated_at\"\n 10) \"2023-08-29T11:06:12.429555Z\"\n 11) \"updated_by\"\n 12) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 13) \"id\"\n 14) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 15) \"metadata\"\n 16) \"{\\\"Update\\\":\\\"spring-wood\\\"}\"\n 17) \"created_at\"\n 18) \"2023-08-29T11:06:12.129484Z\"\n 19) \"status\"\n 20) \"enabled\"\n
"},{"location":"events/#group-view-event","title":"Group view event","text":"Whenever group is viewed, users
service will generate new view
event. This event will have the following format:
1) \"1693307172257-0\"\n2) 1) \"occurred_at\"\n 2) \"1693307172257041358\"\n 3) \"operation\"\n 4) \"group.view\"\n 5) \"id\"\n 6) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 7) \"owner\"\n 8) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 9) \"name\"\n 10) \"-fragrant-resonance\"\n 11) \"metadata\"\n 12) \"{}\"\n 13) \"created_at\"\n 14) \"2023-08-29T11:06:12.129484Z\"\n 15) \"status\"\n 16) \"enabled\"\n
"},{"location":"events/#group-list-event","title":"Group list event","text":"Whenever group list is fetched, users
service will generate new list
event. This event will have the following format:
1) \"1693307172264-0\"\n2) 1) \"occurred_at\"\n 2) \"1693307172264183217\"\n 3) \"operation\"\n 4) \"group.list\"\n 5) \"total\"\n 6) \"0\"\n 7) \"offset\"\n 8) \"0\"\n 9) \"limit\"\n 10) \"10\"\n 11) \"status\"\n 12) \"enabled\"\n
"},{"location":"events/#group-list-by-user-event","title":"Group list by user event","text":"Whenever group list by user is fetched, users
service will generate new list_by_user
event. This event will have the following format:
1) \"1693308937283-0\"\n2) 1) \"limit\"\n 2) \"10\"\n 3) \"channel_id\"\n 4) \"bb1a7b38-cd79-410d-aca7-e744decd7b8e\"\n 5) \"status\"\n 6) \"enabled\"\n 7) \"occurred_at\"\n 8) \"1693308937282933017\"\n 9) \"operation\"\n 10) \"group.list_by_user\"\n 11) \"total\"\n 12) \"0\"\n 13) \"offset\"\n 14) \"0\"\n
"},{"location":"events/#group-remove-event","title":"Group remove event","text":"Whenever group instance changes state in the system, users
service will generate and publish new remove
event. This event will have the following format:
1) \"1693307172460-0\"\n2) 1) \"updated_by\"\n 2) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 3) \"occurred_at\"\n 4) \"1693307172459828786\"\n 5) \"operation\"\n 6) \"group.remove\"\n 7) \"id\"\n 8) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 9) \"status\"\n 10) \"disabled\"\n 11) \"updated_at\"\n 12) \"2023-08-29T11:06:12.429555Z\"\n\n1) \"1693307172473-0\"\n2) 1) \"id\"\n 2) \"bc7fb023-70d5-41aa-bf73-3eab1cf001c9\"\n 3) \"status\"\n 4) \"enabled\"\n 5) \"updated_at\"\n 6) \"2023-08-29T11:06:12.429555Z\"\n 7) \"updated_by\"\n 8) \"e1b982d8-a332-4bc2-aaff-4bbaa86880fc\"\n 9) \"occurred_at\"\n 10) \"1693307172473661564\"\n 11) \"operation\"\n 12) \"group.remove\"\n
"},{"location":"events/#policy-authorize-event","title":"Policy authorize event","text":"Whenever policy is authorized, users
service will generate new authorize
event. This event will have the following format:
1) \"1693311470724-0\"\n2) 1) \"entity_type\"\n 2) \"thing\"\n 3) \"object\"\n 4) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 5) \"actions\"\n 6) \"c_list\"\n 7) \"occurred_at\"\n 8) \"1693311470724174126\"\n 9) \"operation\"\n 10) \"policies.authorize\"\n
"},{"location":"events/#policy-add-event","title":"Policy add event","text":"Whenever policy is added, users
service will generate new add
event. This event will have the following format:
1) \"1693311470721-0\"\n2) 1) \"operation\"\n 2) \"policies.add\"\n 3) \"owner_id\"\n 4) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 5) \"subject\"\n 6) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 7) \"object\"\n 8) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 9) \"actions\"\n 10) \"[g_add,c_list]\"\n 11) \"created_at\"\n 12) \"2023-08-29T12:17:50.715541Z\"\n 13) \"occurred_at\"\n 14) \"1693311470721394773\"\n
"},{"location":"events/#policy-update-event","title":"Policy update event","text":"Whenever policy is updated, users
service will generate new update
event. This event will have the following format:
1) \"1693312500101-0\"\n2) 1) \"updated_at\"\n 2) \"2023-08-29T12:35:00.095028Z\"\n 3) \"occurred_at\"\n 4) \"1693312500101367995\"\n 5) \"operation\"\n 6) \"policies.update\"\n 7) \"owner_id\"\n 8) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 9) \"subject\"\n 10) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 11) \"object\"\n 12) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 13) \"actions\"\n 14) \"[g_add,c_list]\"\n 15) \"created_at\"\n 16) \"2023-08-29T12:17:50.715541Z\"\n
"},{"location":"events/#policy-remove-event","title":"Policy remove event","text":"Whenever policy is removed, users
service will generate new remove
event. This event will have the following format:
1) \"1693312590631-0\"\n2) 1) \"occurred_at\"\n 2) \"1693312590631691388\"\n 3) \"operation\"\n 4) \"policies.delete\"\n 5) \"subject\"\n 6) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 7) \"object\"\n 8) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 9) \"actions\"\n 10) \"[g_add,c_list]\"\n
"},{"location":"events/#policy-list-event","title":"Policy list event","text":"Whenever policy list is fetched, things
service will generate new list
event. This event will have the following format:
1) \"1693312633649-0\"\n2) 1) \"operation\"\n 2) \"policies.list\"\n 3) \"total\"\n 4) \"0\"\n 5) \"limit\"\n 6) \"10\"\n 7) \"offset\"\n 8) \"0\"\n 9) \"occurred_at\"\n 10) \"1693312633649171129\"\n
"},{"location":"events/#things-service","title":"Things Service","text":"For every operation that has side effects (that is changing service state) things
service will generate new event and publish it to Redis Stream called mainflux.things
. Every event has its own event ID that is automatically generated and operation
field that can have one of the following values:
thing.create
for thing creationthing.update
for thing updatething.remove
for thing change of statething.view
for thing viewthing.list
for listing thingsthing.list_by_channel
for listing things by channelthing.identify
for thing identificationchannel.create
for channel creationchannel.update
for channel updatechannel.remove
for channel change of statechannel.view
for channel viewchannel.list
for listing channelschannel.list_by_thing
for listing channels by thingpolicy.authorize
for policy authorizationpolicy.add
for policy creationpolicy.update
for policy updatepolicy.remove
for policy deletionpolicy.list
for listing policiesBy fetching and processing these events you can reconstruct things
service state. If you store some of your custom data in metadata
field, this is the perfect way to fetch it and process it. If you want to integrate through docker-compose.yml you can use mainflux-es-redis
service. Just connect to it and consume events from Redis Stream named mainflux.things
.
Whenever thing is created, things
service will generate new create
event. This event will have the following format:
1) 1) \"1693311470576-0\"\n2) 1) \"operation\"\n 2) \"thing.create\"\n 3) \"id\"\n 4) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 5) \"status\"\n 6) \"enabled\"\n 7) \"created_at\"\n 8) \"2023-08-29T12:17:50.566453Z\"\n 9) \"name\"\n 10) \"-broken-cloud\"\n 11) \"owner\"\n 12) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 13) \"metadata\"\n 14) \"{}\"\n 15) \"occurred_at\"\n 16) \"1693311470576589894\"\n
As you can see from this example, every odd field represents a field name while every even field represents a field value. This is the standard event format for Redis Streams. If you want to extract the metadata field from this event, you'll have to read it as a string first and then deserialize it into some structured format.
Whenever thing instance is updated, things
service will generate new update
event. This event will have the following format:
1) \"1693311470669-0\"\n2) 1) \"operation\"\n 2) \"thing.update\"\n 3) \"updated_at\"\n 4) \"2023-08-29T12:17:50.665752Z\"\n 5) \"updated_by\"\n 6) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 7) \"owner\"\n 8) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 9) \"created_at\"\n 10) \"2023-08-29T12:17:50.566453Z\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"id\"\n 14) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 15) \"name\"\n 16) \"lingering-sea\"\n 17) \"metadata\"\n 18) \"{\\\"Update\\\":\\\"nameless-glitter\\\"}\"\n 19) \"occurred_at\"\n 20) \"1693311470669567023\"\n
"},{"location":"events/#thing-update-secret-event","title":"Thing update secret event","text":"Whenever thing secret is updated, things
service will generate new update_secret
event. This event will have the following format:
1) \"1693311470676-0\"\n2) 1) \"id\"\n 2) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 3) \"name\"\n 4) \"lingering-sea\"\n 5) \"metadata\"\n 6) \"{\\\"Update\\\":\\\"nameless-glitter\\\"}\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"occurred_at\"\n 10) \"1693311470676563107\"\n 11) \"operation\"\n 12) \"thing.update_secret\"\n 13) \"updated_at\"\n 14) \"2023-08-29T12:17:50.672865Z\"\n 15) \"updated_by\"\n 16) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 17) \"owner\"\n 18) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 19) \"created_at\"\n 20) \"2023-08-29T12:17:50.566453Z\"\n
"},{"location":"events/#thing-update-tags-event","title":"Thing update tags event","text":"Whenever thing tags are updated, things
service will generate new update_tags
event. This event will have the following format:
1) \"1693311470682-0\"\n2) 1) \"operation\"\n 2) \"thing.update_tags\"\n 3) \"owner\"\n 4) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 5) \"metadata\"\n 6) \"{\\\"Update\\\":\\\"nameless-glitter\\\"}\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"occurred_at\"\n 10) \"1693311470682522926\"\n 11) \"updated_at\"\n 12) \"2023-08-29T12:17:50.679301Z\"\n 13) \"updated_by\"\n 14) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 15) \"id\"\n 16) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 17) \"name\"\n 18) \"lingering-sea\"\n 19) \"tags\"\n 20) \"[morning-pine]\"\n 21) \"created_at\"\n 22) \"2023-08-29T12:17:50.566453Z\"\n
"},{"location":"events/#thing-remove-event","title":"Thing remove event","text":"Whenever thing instance is removed from the system, things
service will generate and publish new remove
event. This event will have the following format:
1) \"1693311470689-0\"\n2) 1) \"updated_by\"\n 2) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 3) \"occurred_at\"\n 4) \"1693311470688911826\"\n 5) \"operation\"\n 6) \"thing.remove\"\n 7) \"id\"\n 8) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 9) \"status\"\n 10) \"disabled\"\n 11) \"updated_at\"\n 12) \"2023-08-29T12:17:50.679301Z\"\n\n1) \"1693311470695-0\"\n2) 1) \"operation\"\n 2) \"thing.remove\"\n 3) \"id\"\n 4) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 5) \"status\"\n 6) \"enabled\"\n 7) \"updated_at\"\n 8) \"2023-08-29T12:17:50.679301Z\"\n 9) \"updated_by\"\n 10) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 11) \"occurred_at\"\n 12) \"1693311470695446948\"\n
"},{"location":"events/#thing-view-event","title":"Thing view event","text":"Whenever thing is viewed, things
service will generate new view
event. This event will have the following format:
1) \"1693311470608-0\"\n2) 1) \"operation\"\n 2) \"thing.view\"\n 3) \"id\"\n 4) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 5) \"name\"\n 6) \"-broken-cloud\"\n 7) \"owner\"\n 8) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 9) \"metadata\"\n 10) \"{}\"\n 11) \"created_at\"\n 12) \"2023-08-29T12:17:50.566453Z\"\n 13) \"status\"\n 14) \"enabled\"\n 15) \"occurred_at\"\n 16) \"1693311470608701504\"\n
"},{"location":"events/#thing-list-event","title":"Thing list event","text":"Whenever thing list is fetched, things
service will generate new list
event. This event will have the following format:
1) \"1693311470613-0\"\n2) 1) \"occurred_at\"\n 2) \"1693311470613173088\"\n 3) \"operation\"\n 4) \"thing.list\"\n 5) \"total\"\n 6) \"0\"\n 7) \"offset\"\n 8) \"0\"\n 9) \"limit\"\n 10) \"10\"\n 11) \"status\"\n 12) \"enabled\"\n
"},{"location":"events/#thing-list-by-channel-event","title":"Thing list by channel event","text":"Whenever thing list by channel is fetched, things
service will generate new list_by_channel
event. This event will have the following format:
1) \"1693312322620-0\"\n2) 1) \"operation\"\n 2) \"thing.list_by_channel\"\n 3) \"total\"\n 4) \"0\"\n 5) \"offset\"\n 6) \"0\"\n 7) \"limit\"\n 8) \"10\"\n 9) \"channel_id\"\n 10) \"8d77099e-4911-4140-8555-7d3be65a1694\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"occurred_at\"\n 14) \"1693312322620481072\"\n
"},{"location":"events/#thing-identify-event","title":"Thing identify event","text":"Whenever thing is identified, things
service will generate new identify
event. This event will have the following format:
1) \"1693312391155-0\"\n2) 1) \"operation\"\n 2) \"thing.identify\"\n 3) \"thing_id\"\n 4) \"dc82d6bf-973b-4582-9806-0230cee11c20\"\n 5) \"occurred_at\"\n 6) \"1693312391155123548\"\n
"},{"location":"events/#channel-create-event","title":"Channel create event","text":"Whenever channel instance is created, things
service will generate and publish new create
event. This event will have the following format:
1) 1) \"1693311470584-0\"\n2) 1) \"owner\"\n 2) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 3) \"name\"\n 4) \"-frosty-moon\"\n 5) \"metadata\"\n 6) \"{}\"\n 7) \"occurred_at\"\n 8) \"1693311470584416323\"\n 9) \"operation\"\n 10) \"channel.create\"\n 11) \"id\"\n 12) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 13) \"status\"\n 14) \"enabled\"\n 15) \"created_at\"\n 16) \"2023-08-29T12:17:50.57866Z\"\n
"},{"location":"events/#channel-update-event","title":"Channel update event","text":"Whenever channel instance is updated, things
service will generate and publish new update
event. This event will have the following format:
1) \"1693311470701-0\"\n2) 1) \"updated_by\"\n 2) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 3) \"owner\"\n 4) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 5) \"created_at\"\n 6) \"2023-08-29T12:17:50.57866Z\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"operation\"\n 10) \"channel.update\"\n 11) \"updated_at\"\n 12) \"2023-08-29T12:17:50.698278Z\"\n 13) \"metadata\"\n 14) \"{\\\"Update\\\":\\\"silent-hill\\\"}\"\n 15) \"occurred_at\"\n 16) \"1693311470701812291\"\n 17) \"id\"\n 18) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 19) \"name\"\n 20) \"morning-forest\"\n
Note that the update channel event will contain only those fields that were updated using the update channel endpoint.
"},{"location":"events/#channel-remove-event","title":"Channel remove event","text":"Whenever channel instance is removed from the system, things
service will generate and publish new remove
event. This event will have the following format:
1) \"1693311470708-0\"\n2) 1) \"status\"\n 2) \"disabled\"\n 3) \"updated_at\"\n 4) \"2023-08-29T12:17:50.698278Z\"\n 5) \"updated_by\"\n 6) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 7) \"occurred_at\"\n 8) \"1693311470708219296\"\n 9) \"operation\"\n 10) \"channel.remove\"\n 11) \"id\"\n 12) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n\n1) \"1693311470714-0\"\n2) 1) \"occurred_at\"\n 2) \"1693311470714118979\"\n 3) \"operation\"\n 4) \"channel.remove\"\n 5) \"id\"\n 6) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 7) \"status\"\n 8) \"enabled\"\n 9) \"updated_at\"\n 10) \"2023-08-29T12:17:50.698278Z\"\n 11) \"updated_by\"\n 12) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n
"},{"location":"events/#channel-view-event","title":"Channel view event","text":"Whenever channel is viewed, things
service will generate new view
event. This event will have the following format:
1) \"1693311470615-0\"\n2) 1) \"id\"\n 2) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 3) \"owner\"\n 4) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 5) \"name\"\n 6) \"-frosty-moon\"\n 7) \"metadata\"\n 8) \"{}\"\n 9) \"created_at\"\n 10) \"2023-08-29T12:17:50.57866Z\"\n 11) \"status\"\n 12) \"enabled\"\n 13) \"occurred_at\"\n 14) \"1693311470615693019\"\n 15) \"operation\"\n 16) \"channel.view\"\n
"},{"location":"events/#channel-list-event","title":"Channel list event","text":"Whenever channel list is fetched, things
service will generate new list
event. This event will have the following format:
1) \"1693311470619-0\"\n2) 1) \"limit\"\n 2) \"10\"\n 3) \"status\"\n 4) \"enabled\"\n 5) \"occurred_at\"\n 6) \"1693311470619194337\"\n 7) \"operation\"\n 8) \"channel.list\"\n 9) \"total\"\n 10) \"0\"\n 11) \"offset\"\n 12) \"0\"\n
"},{"location":"events/#channel-list-by-thing-event","title":"Channel list by thing event","text":"Whenever channel list by thing is fetched, things
service will generate new list_by_thing
event. This event will have the following format:
1) \"1693312299484-0\"\n2) 1) \"occurred_at\"\n 2) \"1693312299484000183\"\n 3) \"operation\"\n 4) \"channel.list_by_thing\"\n 5) \"total\"\n 6) \"0\"\n 7) \"offset\"\n 8) \"0\"\n 9) \"limit\"\n 10) \"10\"\n 11) \"thing_id\"\n 12) \"dc82d6bf-973b-4582-9806-0230cee11c20\"\n 13) \"status\"\n 14) \"enabled\"\n
"},{"location":"events/#policy-authorize-event_1","title":"Policy authorize event","text":"Whenever policy is authorized, things
service will generate new authorize
event. This event will have the following format:
1) \"1693311470724-0\"\n2) 1) \"entity_type\"\n 2) \"thing\"\n 3) \"object\"\n 4) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 5) \"actions\"\n 6) \"m_read\"\n 7) \"occurred_at\"\n 8) \"1693311470724174126\"\n 9) \"operation\"\n 10) \"policies.authorize\"\n
"},{"location":"events/#policy-add-event_1","title":"Policy add event","text":"Whenever policy is added, things
service will generate new add
event. This event will have the following format:
1) \"1693311470721-0\"\n2) 1) \"operation\"\n 2) \"policies.add\"\n 3) \"owner_id\"\n 4) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 5) \"subject\"\n 6) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 7) \"object\"\n 8) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 9) \"actions\"\n 10) \"[m_write,m_read]\"\n 11) \"created_at\"\n 12) \"2023-08-29T12:17:50.715541Z\"\n 13) \"occurred_at\"\n 14) \"1693311470721394773\"\n
"},{"location":"events/#policy-update-event_1","title":"Policy update event","text":"Whenever policy is updated, things
service will generate new update
event. This event will have the following format:
1) \"1693312500101-0\"\n2) 1) \"updated_at\"\n 2) \"2023-08-29T12:35:00.095028Z\"\n 3) \"occurred_at\"\n 4) \"1693312500101367995\"\n 5) \"operation\"\n 6) \"policies.update\"\n 7) \"owner_id\"\n 8) \"fe2e5de0-9900-4ac5-b364-eae0c35777fb\"\n 9) \"subject\"\n 10) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 11) \"object\"\n 12) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 13) \"actions\"\n 14) \"[m_write,m_read]\"\n 15) \"created_at\"\n 16) \"2023-08-29T12:17:50.715541Z\"\n
"},{"location":"events/#policy-remove-event_1","title":"Policy remove event","text":"Whenever policy is removed, things
service will generate new remove
event. This event will have the following format:
1) \"1693312590631-0\"\n2) 1) \"occurred_at\"\n 2) \"1693312590631691388\"\n 3) \"operation\"\n 4) \"policies.delete\"\n 5) \"subject\"\n 6) \"12510af8-b6a7-410d-944c-9feded199d6d\"\n 7) \"object\"\n 8) \"8a85e2d5-e783-43ee-8bea-d6d0f8039e78\"\n 9) \"actions\"\n 10) \"[m_write,m_read]\"\n
"},{"location":"events/#policy-list-event_1","title":"Policy list event","text":"Whenever policy list is fetched, things
service will generate new list
event. This event will have the following format:
1) \"1693312633649-0\"\n2) 1) \"operation\"\n 2) \"policies.list\"\n 3) \"total\"\n 4) \"0\"\n 5) \"limit\"\n 6) \"10\"\n 7) \"offset\"\n 8) \"0\"\n 9) \"occurred_at\"\n 10) \"1693312633649171129\"\n
Note: Every one of these events will omit fields that were not used or are not relevant for the specific operation. Also, field ordering is not guaranteed, so DO NOT rely on it.
"},{"location":"events/#bootstrap-service","title":"Bootstrap Service","text":"Bootstrap service publishes events to Redis Stream called mainflux.bootstrap
. Every event from this service contains operation
field which indicates one of the following event types:
config.create
for configuration creation,config.update
for configuration update,config.remove
for configuration removal,thing.bootstrap
for device bootstrap,thing.state_change
for device state change,thing.update_connections
for device connection update. If you want to integrate through docker-compose.yml you can use the mainflux-es-redis service. Just connect to it and consume events from the Redis Stream named mainflux.bootstrap
.
Whenever configuration is created, bootstrap
service will generate and publish new create
event. This event will have the following format:
1) \"1693313286544-0\"\n2) 1) \"state\"\n 2) \"0\"\n 3) \"operation\"\n 4) \"config.create\"\n 5) \"name\"\n 6) \"demo\"\n 7) \"channels\"\n 8) \"[8d77099e-4911-4140-8555-7d3be65a1694]\"\n 9) \"client_cert\"\n 10) \"-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n 11) \"ca_cert\"\n 12) \"-----BEGIN ENCRYPTED PRIVATE 
KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n 13) \"occurred_at\"\n 14) \"1693313286544243035\"\n 15) \"thing_id\"\n 16) \"dc82d6bf-973b-4582-9806-0230cee11c20\"\n 17) \"content\"\n 18) \"{ \\\"server\\\": { \\\"address\\\": \\\"127.0.0.1\\\", \\\"port\\\": 8080 }, \\\"database\\\": { \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 5432, \\\"username\\\": \\\"user\\\", \\\"password\\\": \\\"password\\\", \\\"dbname\\\": \\\"mydb\\\" }, \\\"logging\\\": { \\\"level\\\": \\\"info\\\", \\\"file\\\": \\\"app.log\\\" } }\"\n 19) \"owner\"\n 20) \"64fd20bf-e8fb-46bf-9b64-2a6572eda21b\"\n 21) \"external_id\"\n 22) \"209327A2FA2D47E3B05F118D769DC521\"\n 23) \"client_key\"\n 24) \"-----BEGIN ENCRYPTED PRIVATE 
KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n
"},{"location":"events/#configuration-update-event","title":"Configuration update event","text":"Whenever configuration is updated, bootstrap
service will generate and publish new update
event. This event will have the following format:
1) \"1693313985263-0\"\n2) 1) \"state\"\n 2) \"0\"\n 3) \"operation\"\n 4) \"config.update\"\n 5) \"thing_id\"\n 6) \"dc82d6bf-973b-4582-9806-0230cee11c20\"\n 7) \"content\"\n 8) \"{ \\\"server\\\": { \\\"address\\\": \\\"127.0.0.1\\\", \\\"port\\\": 8080 }, \\\"database\\\": { \\\"host\\\": \\\"localhost\\\", \\\"port\\\": 5432, \\\"username\\\": \\\"user\\\", \\\"password\\\": \\\"password\\\", \\\"dbname\\\": \\\"mydb\\\" } }\"\n 9) \"name\"\n 10) \"demo\"\n 11) \"occurred_at\"\n 12) \"1693313985263381501\"\n
"},{"location":"events/#certificate-update-event","title":"Certificate update event","text":"Whenever certificate is updated, bootstrap
service will generate and publish new update
event. This event will have the following format:
1) \"1693313759203-0\"\n2) 1) \"thing_key\"\n 2) \"dc82d6bf-973b-4582-9806-0230cee11c20\"\n 3) \"client_cert\"\n 4) \"-----BEGIN ENCRYPTED PRIVATE KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n 5) \"client_key\"\n 6) \"-----BEGIN ENCRYPTED PRIVATE 
KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n 7) \"ca_cert\"\n 8) \"-----BEGIN ENCRYPTED PRIVATE 
KEY-----MIIFHDBOBgkqhkiG9w0BBQ0wQTApBgkqhkiG9w0BBQwwHAQIc+VAU9JPnIkCAggAMAwGCCqGSIb3DQIJBQAwFAYIKoZIhvcNAwcECImSB+9qZ8dmBIIEyBW/rZlECWnEcMuTXhfJFe+3HP4rV+TXEEuigwCbtVPHWXoZj7KqGiOFgFaDL5Ne/GRwVD6geaTeQVl3aoHzo8mY0yuX2L36Ho2yHF/Bw89WT3hgP0lZ1lVO7O7n8DwybOaoJ+1S3akyb6OPbqcxJou1IGzKV1kz77R8V8nOFSd1BOepNbanGxVG8Jkgc37dQnICXwwaYkTx9PQBtSux1j3KgX0p+VAUNoUFi7N6b0MeO8iEuLU1dUiVwlH/jtitg0W3AvSV+5gezTT2VQW3CVlz6IBTPI1Rfl/3ss18Tao0NiPUmXMIgreBCamXvb0aJm8JxVbhoFYqWVNVocBD+n1+NwhCRlZM5Kgaes5S2JuFnjTAqEYytlQqEySbaN57XYCDNVmQz2iViz/+npuR9SCGwnNvV/TNsKRwav+0NC0pbf3LNk/KL9/X5ccmPhB5Rl7IS/v1BBLYX/jYWVN0dJiSA7fVIr9Acr7IbxWEQ2Y2qh1wdhayi4FBUHY3weivYSU3uGZizsSGJP/N6DutBgS1aXd5X/CqfF7VzRaKF4cfLO4XxTYUEjOztUNMN2XmW0o+ULjQmbouRPs/PIFmh6rc+h42m6p4SkjcsIKOy+mPTeJqhOVmYoMzO8+7mmXDOjFwvi/w97sdmbjII8Zn2iR/N8GuY23vv5h6LQ3tQ5kTA4IuPbYCVLeggd4iMM6TgpuJn0aG7yo4tDFqMeadCVhP2Bp3JQa8r3B2IJstTTF1OtZCrInjSus9ViOiM02Iz3ZmyglsMonJDlWeJL5jKBgqPbLR82IDhIY4IO6SqoVsWu4iWuLW5/TM3fdfYG3Wdvu7Suz7/anLAaMQEzKhObwgDdKmv4PkF75frex969CB1pQqSVnXmz4GrtxVUzWtlflaTSdSegpUXWLhG+jUNKTu+ptxDNM/JBxRNLSzdvsGbkI0qycOCliVpKkkvuiBGtiDWNax6KhV4/oRjkEkTRks9Xeko+q3uY4B//AGxsotsVhF5vhUDTOl5IX7a7wCPtbTGiaR79eprRzGnP9yP38djVrvXprJFU8P7GUr/f2qJt2jDYuCkaqAMsfjdu6YHitjj3ty4vrASgxJ0vsroWhjgiCwgASqM7GnweHSHy5/OZK8jCZX+g+B63Mu4ec+/nNnjvuLqBBZN/FSzXU5fVmYznfPaqW+1Xep+Aj1yGk3L3tvnKLc3sZ1HAJQEjud5dbME6e0JGxh5xOCnzWUR+YL/96KJAcgkxDJ1DxxHv0Uu/5kO5InOsPjs4YKuzqD4nUmGsFsJzTxG626wdGXJMO4YCRKkKtnNeWqMaslM3paN19/tTWyEbaDqc5mVzYLIb3Mzju+OV4GniDeVIvSIsXK5aFGj1PEhfCprQCqUzdNhFU8hF4kUVhn9dp0ExveT7btHSMlEZAWHRkDuLqaImpQkjYmwt90cxtdZwQvjTDtsFmQrvcSp8n1K3P5PwZpVtIw2UHpx+NjE8ZYwOozpXl/oOMzVTB8mi1dQGFkpac9cwnzCZof0ub4iutBeKc4WeEOytvY+CY7hc+/ncCprZ08nlkQarQV7jhfJj658GfBMLGzJtYkCrHwi/AoseIXa5W7eX+lz7O92H2M5QnEkPStQ9lsz2VkYA==-----END ENCRYPTED PRIVATE KEY-----\"\n 9) \"operation\"\n 10) \"cert.update\"\n 11) \"occurred_at\"\n 12) \"1693313759203076553\"\n
"},{"location":"events/#configuration-list-event","title":"Configuration list event","text":"Whenever configuration list is fetched, bootstrap
service will generate new list
event. This event will have the following format:
1) \"1693339274766-0\"\n2) 1) \"occurred_at\"\n 2) \"1693339274766130265\"\n 3) \"offset\"\n 4) \"0\"\n 5) \"limit\"\n 6) \"10\"\n 7) \"operation\"\n 8) \"config.list\"\n
"},{"location":"events/#configuration-view-event","title":"Configuration view event","text":"Whenever configuration is viewed, bootstrap
service will generate new view
event. This event will have the following format:
1) 1) \"1693339152105-0\"\n2) 1) \"thing_id\"\n 2) \"74f00d13-d370-42c0-b528-04fff995275c\"\n 3) \"name\"\n 4) \"demo\"\n 5) \"external_id\"\n 6) \"FF-41-EF-BC-90-BC\"\n 7) \"channels\"\n 8) \"[90aae157-d47f-4d71-9a68-b000c0025ae8]\"\n 9) \"client_cert\"\n 10) \"-----BEGIN PRIVATE KEY-----MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDVYaZsyd76aSWZexY/OyX8hVdE+ruT3OZrE6gFSjDiaAA2Uf5/eHT1BJdR4LviooXix8vfc/g5CAN/z98zmUmAzx9lk5T4sRhJfqYQ2yDEt1tVDwD3RzL9RHXRWiZu4thk253jOpT15VFvOf5wE6mhVozFl9OetVJb4eqKbHx9RY0rMXwiBiCC2LcUtcp6rVjp4pK6VGjehA8siVX9bnRsIY776jDb/pm2n+y5G+bd1CifSdgTrr7QLKFlx0//5lyslmfUbf76kg9bZ8Qe2NdFKvcpEZ4ENxtwMrqW2i1pTExVHNpka8rhA5936qpDKu1ce+kccIbFsPRAHU5PyXfNAgMBAAECggEAAtBt4c4WcGuwlkHxp4B/3hZix0Md9DOb9QTmWLjYxN5QRRHMbyFHPEVaOuHhjc9M6r0YgD2cTsw/QjvwmqfxOI9YFP6JnsS0faD7pF9EzbNes1QmVByOnJkpi0r1aiL4baQZL0+sz+1n/IqMQ4LO4D+ETcV/LKmsM2VbCDD+wfwsVkTmgaqKtXIFQ3bOU5LjRcxCZFs81z3mYDyP4hfnlmTWOOXcf8yLqx5LGH8erCGXgrhZiN5/mhkzUpkF75Eo4qt3jVZEt+d48RnPsk0TO0rqs4j5F3d/6Dboi3UpRlHZ4vEM7MeDGoMuXTh59MzbV1e/03sY2jTtB2NVQ51pFQKBgQD0kjYorDqu5e82Orp5rRkS58nUDgq3vaxNKJq+32LuuTuNjRrM57XoyBAVnBlfTP5IOQaxjYPNxHkZhYdYREyZKx96g6FZUWLQxKO+vP+E25MXSsnP8FMkQNfgSvMCxfIyFO3aVbDUme6bIScPNCTzKVWSWTj5Zyyig9VQpoRJ5wKBgQDfWlF7krUefQEvdJFxd9IGBvlkWkGi942Hh0H6vJCzhMQO8DeHZjO4oiiCEpRmBdkLDlZs81mykmyFEpjcmv4JD23HQ9IPi0/4Bsuu3SDXF4HC5/QYldaG0behBmMmDYuaQ0NHY5rpCnpZBteYT6V6lcBm/AIKwvz+N8fY2fDCKwKBgQDfBCjQw+SrMc8FI16Br7+KhsR7UuahEBt7LIiXfvom98//TuleafhuMWjBW9ujFIFXeHDLHWFQFFXdWO7HJVi33yPQQxGxcc5q0rUCLDPQga1Kcw8+R0Z5a4uu4olgQQKOepk+HB+obkmvOfb1HTaIaWu3jRawDk4cT50H8x/0hwKBgB63eB9LhNclj+Ur3djCBsNHcELp2r8D1pX99wf5qNjXeHMpfCmF17UbsAB7d6c0RK4tkZs4OGzDkGMYtKcaNbefRJSz8g6rNRtCK/7ncF3EYNciOUKsUK2H5/4gN8CC+mEDwRvvSd2k0ECwHTRYN8TNFYHURJ+gQ1Te7QAYsPCzAoGBAMZnbAY1Q/gK11JaPE2orFb1IltDRKB2IXh5Ton0ZCqhmOhMLQ+4t7DLPUKdXlsBZa/IIm5XehBg6VajbG0zulKLzO4YHuWEduwYON+4DNQxLWhBCBauOZ7+dcGUvYkeKoySYs6hznV9mlMHe1TuhCO8zHjpvBXOrlAR8VX5BXKz-----END PRIVATE KEY-----\"\n 11) \"state\"\n 12) \"0\"\n 13) \"operation\"\n 14) \"config.view\"\n 15) \"content\"\n 16) \"{\\\"device_id\\\": \\\"12345\\\",\\\"secure_connection\\\": true,\\\"sensor_config\\\": {\\\"temperature\\\": true,\\\"humidity\\\": true,\\\"pressure\\\": false}}\"\n 17) \"owner\"\n 18) \"b2972472-c93c-408f-9b77-0f8a81ee47af\"\n 19) \"occurred_at\"\n 20) \"1693339152105496336\"\n
"},{"location":"events/#configuration-remove-event","title":"Configuration remove event","text":"Whenever configuration is removed, bootstrap
service will generate and publish new remove
event. This event will have the following format:
1) \"1693339203771-0\"\n2) 1) \"occurred_at\"\n 2) \"1693339203771705590\"\n 3) \"thing_id\"\n 4) \"853f37b9-513a-41a2-a575-bbaa746961a6\"\n 5) \"operation\"\n 6) \"config.remove\"\n
"},{"location":"events/#configuration-remove-handler","title":"Configuration remove handler","text":"Whenever a thing is removed, bootstrap
service will generate and publish a new config.remove_handler
event. This event will have the following format:
1) 1) \"1693337955655-0\"\n2) 1) \"config_id\"\n 2) \"0198b458-573e-415a-aa05-052ddab9709d\"\n 3) \"operation\"\n 4) \"config.remove_handler\"\n 5) \"occurred_at\"\n 6) \"1693337955654969489\"\n
"},{"location":"events/#thing-bootstrap-event","title":"Thing bootstrap event","text":"Whenever thing is bootstrapped, bootstrap
service will generate and publish a new bootstrap
event. This event will have the following format:
1) 1) \"1693339161600-0\"\n2) 1) \"occurred_at\"\n 2) \"1693339161600369325\"\n 3) \"external_id\"\n 4) \"FF-41-EF-BC-90-BC\"\n 5) \"success\"\n 6) \"1\"\n 7) \"operation\"\n 8) \"thing.bootstrap\"\n 9) \"thing_id\"\n 10) \"74f00d13-d370-42c0-b528-04fff995275c\"\n 11) \"content\"\n 12) \"{\\\"device_id\\\": \\\"12345\\\",\\\"secure_connection\\\": true,\\\"sensor_config\\\": {\\\"temperature\\\": true,\\\"humidity\\\": true,\\\"pressure\\\": false}}\"\n 13) \"owner\"\n 14) \"b2972472-c93c-408f-9b77-0f8a81ee47af\"\n 15) \"name\"\n 16) \"demo\"\n 17) \"channels\"\n 18) \"[90aae157-d47f-4d71-9a68-b000c0025ae8]\"\n 19) \"ca_cert\"\n 20) \"-----BEGIN PRIVATE KEY-----MIIEvgIBADANBgkqhkiG9w0BAQEFAASCBKgwggSkAgEAAoIBAQDVYaZsyd76aSWZexY/OyX8hVdE+ruT3OZrE6gFSjDiaAA2Uf5/eHT1BJdR4LviooXix8vfc/g5CAN/z98zmUmAzx9lk5T4sRhJfqYQ2yDEt1tVDwD3RzL9RHXRWiZu4thk253jOpT15VFvOf5wE6mhVozFl9OetVJb4eqKbHx9RY0rMXwiBiCC2LcUtcp6rVjp4pK6VGjehA8siVX9bnRsIY776jDb/pm2n+y5G+bd1CifSdgTrr7QLKFlx0//5lyslmfUbf76kg9bZ8Qe2NdFKvcpEZ4ENxtwMrqW2i1pTExVHNpka8rhA5936qpDKu1ce+kccIbFsPRAHU5PyXfNAgMBAAECggEAAtBt4c4WcGuwlkHxp4B/3hZix0Md9DOb9QTmWLjYxN5QRRHMbyFHPEVaOuHhjc9M6r0YgD2cTsw/QjvwmqfxOI9YFP6JnsS0faD7pF9EzbNes1QmVByOnJkpi0r1aiL4baQZL0+sz+1n/IqMQ4LO4D+ETcV/LKmsM2VbCDD+wfwsVkTmgaqKtXIFQ3bOU5LjRcxCZFs81z3mYDyP4hfnlmTWOOXcf8yLqx5LGH8erCGXgrhZiN5/mhkzUpkF75Eo4qt3jVZEt+d48RnPsk0TO0rqs4j5F3d/6Dboi3UpRlHZ4vEM7MeDGoMuXTh59MzbV1e/03sY2jTtB2NVQ51pFQKBgQD0kjYorDqu5e82Orp5rRkS58nUDgq3vaxNKJq+32LuuTuNjRrM57XoyBAVnBlfTP5IOQaxjYPNxHkZhYdYREyZKx96g6FZUWLQxKO+vP+E25MXSsnP8FMkQNfgSvMCxfIyFO3aVbDUme6bIScPNCTzKVWSWTj5Zyyig9VQpoRJ5wKBgQDfWlF7krUefQEvdJFxd9IGBvlkWkGi942Hh0H6vJCzhMQO8DeHZjO4oiiCEpRmBdkLDlZs81mykmyFEpjcmv4JD23HQ9IPi0/4Bsuu3SDXF4HC5/QYldaG0behBmMmDYuaQ0NHY5rpCnpZBteYT6V6lcBm/AIKwvz+N8fY2fDCKwKBgQDfBCjQw+SrMc8FI16Br7+KhsR7UuahEBt7LIiXfvom98//TuleafhuMWjBW9ujFIFXeHDLHWFQFFXdWO7HJVi33yPQQxGxcc5q0rUCLDPQga1Kcw8+R0Z5a4uu4olgQQKOepk+HB+obkmvOfb1HTaIaWu3jRawDk4cT50H8x/0hwKBgB63eB9LhNclj+Ur3djCBsNHcELp2r8D1pX99wf5qNjXeHMpfCmF17UbsAB7d6c0RK4tkZs4OGzDkGMYtKcaNbefRJSz8g6rNRtCK/7ncF3EYNciOUKsUK2H5/4gN8CC+mEDwRvvSd2k0ECwHTRYN8TNFYHURJ+gQ1Te7QAYsPCzAoGBAMZnbAY1Q/gK11JaPE2orFb1IltDRKB2IXh5Ton0ZCqhmOhMLQ+4t7DLPUKdXlsBZa/IIm5XehBg6VajbG0zulKLzO4YHuWEduwYON+4DNQxLWhBCBauOZ7+dcGUvYkeKoySYs6hznV9mlMHe1TuhCO8zHjpvBXOrlAR8VX5BXKz-----END PRIVATE KEY-----\"\n\n
"},{"location":"events/#thing-change-state-event","title":"Thing change state event","text":"Whenever thing's state changes, bootstrap
service will generate and publish a new change state
event. This event will have the following format:
1) \"1555405294806-0\"\n2) 1) \"thing_id\"\n 2) \"63a110d4-2b77-48d2-aa46-2582681eeb82\"\n 3) \"state\"\n 4) \"0\"\n 5) \"timestamp\"\n 6) \"1555405294\"\n 7) \"operation\"\n 8) \"thing.state_change\"\n
"},{"location":"events/#thing-update-connections-event","title":"Thing update connections event","text":"Whenever thing's list of connections is updated, bootstrap
service will generate and publish a new update connections
event. This event will have the following format:
1) \"1555405373360-0\"\n2) 1) \"operation\"\n 2) \"thing.update_connections\"\n 3) \"thing_id\"\n 4) \"63a110d4-2b77-48d2-aa46-2582681eeb82\"\n 5) \"channels\"\n 6) \"ff13ca9c-7322-4c28-a25c-4fe5c7b753fc, 925461e6-edfb-4755-9242-8a57199b90a5, c3642289-501d-4974-82f2-ecccc71b2d82\"\n 7) \"timestamp\"\n 8) \"1555405373\"\n
"},{"location":"events/#channel-update-handler-event","title":"Channel update handler event","text":"Whenever channel is updated, bootstrap
service will generate and publish a new update handler
event. This event will have the following format:
1) \"1693339403536-0\"\n2) 1) \"operation\"\n 2) \"channel.update_handler\"\n 3) \"channel_id\"\n 4) \"0e602731-36ba-4a29-adba-e5761f356158\"\n 5) \"name\"\n 6) \"dry-sky\"\n 7) \"metadata\"\n 8) \"{\\\"log\\\":\\\"info\\\"}\"\n 9) \"occurred_at\"\n 10) \"1693339403536636387\"\n
"},{"location":"events/#channel-remove-handler-event","title":"Channel remove handler event","text":"Whenever channel is removed, bootstrap
service will generate and publish a new remove handler
event. This event will have the following format:
1) \"1693339468719-0\"\n2) 1) \"config_id\"\n 2) \"0198b458-573e-415a-aa05-052ddab9709d\"\n 3) \"operation\"\n 4) \"config.remove_handler\"\n 5) \"occurred_at\"\n 6) \"1693339468719177463\"\n
"},{"location":"events/#mqtt-adapter","title":"MQTT Adapter","text":"Instead of using heartbeat to know when client is connected through MQTT adapter one can fetch events from Redis Streams that MQTT adapter publishes. MQTT adapter publishes events every time client connects and disconnects to stream named mainflux.mqtt
.
Events coming from the MQTT adapter have the following fields:
thing_id
ID of a thing that has connected to the MQTT adapter, event_type
can have two possible values, connect and disconnect, instance
represents the MQTT adapter instance, and occurred_at
is in Epoch UNIX timestamp format. If you want to integrate through docker-compose.yml you can use the mainflux-es-redis
service. Just connect to it and consume events from the Redis Stream named mainflux.mqtt
.
Example of connect event:
1) 1) \"1693312937469-0\"\n2) 1) \"thing_id\"\n 1) \"76a58221-e319-492a-be3e-b3d15631e92a\"\n 2) \"event_type\"\n 3) \"connect\"\n 4) \"instance\"\n 5) \"\"\n 6) \"occurred_at\"\n 7) \"1693312937469719069\"\n
Example of disconnect event:
1) 1) \"1693312937471-0\"\n2) 1) \"thing_id\"\n 2) \"76a58221-e319-492a-be3e-b3d15631e92a\"\n 3) \"event_type\"\n 4) \"disconnect\"\n 5) \"instance\"\n 6) \"\"\n 7) \"occurred_at\"\n 8) \"1693312937471064150\"\n
"},{"location":"getting-started/","title":"Getting Started","text":""},{"location":"getting-started/#step-1-run-the-system","title":"Step 1 - Run the System","text":"Before proceeding, install the following prerequisites:
Once everything is installed, execute the following command from project root:
make run\n
This will start the Mainflux docker composition, which will output the logs from the containers.
"},{"location":"getting-started/#step-2-install-the-cli","title":"Step 2 - Install the CLI","text":"Open a new terminal from which you can interact with the running Mainflux system. The easiest way to do this is by using the Mainflux CLI, which can be downloaded as a tarball from GitHub (here we use release 0.14.0
but be sure to use the latest CLI release):
wget -O- https://github.com/mainflux/mainflux/releases/download/0.14.0/mainflux-cli_0.14.0_linux-amd64.tar.gz | tar xvz -C $GOBIN\n
Make sure that $GOBIN
is added to your $PATH
so that mainflux-cli
command is accessible system-wide
Build mainflux-cli
if the pre-built CLI is not compatible with your OS, e.g. macOS. Please see the CLI for further details.
Once installed, you can use the CLI to quick-provision the system for testing:
mainflux-cli provision test\n
This command actually creates a temporary testing user, logs it in, then creates two things and two channels on behalf of this user. This quickly provisions a Mainflux system with one simple testing scenario.
You can read more about system provisioning in the dedicated Provisioning chapter
Output of the command follows this pattern:
{\n \"created_at\": \"2023-04-04T08:02:47.686337Z\",\n \"credentials\": {\n \"identity\": \"crazy_feistel@email.com\",\n \"secret\": \"12345678\"\n },\n \"id\": \"0216df07-8f08-40ef-ba91-ff0e700f387a\",\n \"name\": \"crazy_feistel\",\n \"status\": \"enabled\",\n \"updated_at\": \"2023-04-04T08:02:47.686337Z\"\n}\n\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw\",\n \"access_type\": \"Bearer\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2ODE3NjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJyZWZyZXNoIn0.3xcrkIBbi2a8firNHtnK6I8sBBOgrQ6XBa3x7cybKc6omOuqrkkNjXGjKU9tgShvjpfCWT48AR1VqO_VxJxL8g\"\n}\n\n\n[\n {\n \"created_at\": \"2023-04-04T08:02:47.81865461Z\",\n \"credentials\": {\n \"secret\": \"fc9473d8-6756-4fcc-968f-ea43cd0b803b\"\n },\n \"id\": \"5d5e593b-7629-4cc3-bebc-b20d8ab9dbef\",\n \"name\": \"d0\",\n \"owner\": \"0216df07-8f08-40ef-ba91-ff0e700f387a\",\n \"status\": \"enabled\",\n \"updated_at\": \"2023-04-04T08:02:47.81865461Z\"\n },\n {\n \"created_at\": \"2023-04-04T08:02:47.818661382Z\",\n \"credentials\": {\n \"secret\": \"56a4b1bd-9750-42b3-a3cb-cf5ee2b86fe4\"\n },\n \"id\": \"45048a8e-c602-4e91-9556-a9d3af6617fb\",\n \"name\": \"d1\",\n \"owner\": \"0216df07-8f08-40ef-ba91-ff0e700f387a\",\n \"status\": \"enabled\",\n \"updated_at\": \"2023-04-04T08:02:47.818661382Z\"\n }\n]\n\n\n[\n {\n \"created_at\": \"2023-04-04T08:02:47.857619Z\",\n \"id\": \"a31e16f8-343c-4366-8b4f-c95e190937f4\",\n \"name\": \"c0\",\n \"owner_id\": \"0216df07-8f08-40ef-ba91-ff0e700f387a\",\n \"status\": \"enabled\",\n \"updated_at\": \"2023-04-04T08:02:47.857619Z\"\n },\n {\n \"created_at\": \"2023-04-04T08:02:47.867336Z\",\n \"id\": \"e20ad0bb-c490-47dd-9366-fb8ffd56c5dc\",\n \"name\": \"c1\",\n \"owner_id\": \"0216df07-8f08-40ef-ba91-ff0e700f387a\",\n \"status\": \"enabled\",\n \"updated_at\": \"2023-04-04T08:02:47.867336Z\"\n }\n]\n\n
In the Mainflux system terminal (where docker compose is running) you should see the following logs:
...\nmainflux-users | {\"level\":\"info\",\"message\":\"Method register_client with id 0216df07-8f08-40ef-ba91-ff0e700f387a using token took 87.335902ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.722776862Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method issue_token of type Bearer for client crazy_feistel@email.com took 55.342161ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.783884818Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 1.389463ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.817018631Z\"}\nmainflux-things | {\"level\":\"info\",\"message\":\"Method create_things 2 things using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 48.137759ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.853310066Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 302.571\u00b5s to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.856820523Z\"}\nmainflux-things | {\"level\":\"info\",\"message\":\"Method create_channel for 2 channels using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 15.340692ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.872089509Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 271.162\u00b5s to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.875812318Z\"}\nmainflux-things | {\"level\":\"info\",\"message\":\"Method add_policy for client with id 5d5e593b-7629-4cc3-bebc-b20d8ab9dbef using token 
eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 28.632906ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.904041832Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 269.959\u00b5s to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.906989497Z\"}\nmainflux-things | {\"level\":\"info\",\"message\":\"Method add_policy for client with id 5d5e593b-7629-4cc3-bebc-b20d8ab9dbef using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 6.303771ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.910594262Z\"}\nmainflux-users | {\"level\":\"info\",\"message\":\"Method identify for token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw with id 0216df07-8f08-40ef-ba91-ff0e700f387a took 364.448\u00b5s to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.912905436Z\"}\nmainflux-things | {\"level\":\"info\",\"message\":\"Method add_policy for client with id 45048a8e-c602-4e91-9556-a9d3af6617fb using token eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA1OTYyNjcsImlhdCI6MTY4MDU5NTM2NywiaWRlbnRpdHkiOiJjcmF6eV9mZWlzdGVsQGVtYWlsLmNvbSIsImlzcyI6ImNsaWVudHMuYXV0aCIsInN1YiI6IjAyMTZkZjA3LThmMDgtNDBlZi1iYTkxLWZmMGU3MDBmMzg3YSIsInR5cGUiOiJhY2Nlc3MifQ.EpaFDcRjYAHwqhejLfay5ju8L1a7VdhXKohUlwTv7YTeOK-ClfNNx6KznV05Swdj6lgvbmVAfe0wz2JMpfMjdw took 7.73352ms to complete without errors.\",\"ts\":\"2023-04-04T08:02:47.920205467Z\"}\n...\n\n
This proves that these provisioning commands were sent from the CLI to the Mainflux system.
"},{"location":"getting-started/#step-4-send-messages","title":"Step 4 - Send Messages","text":"Once system is provisioned, a thing
can start sending messages on a channel
:
mainflux-cli messages send <channel_id> '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]' <thing_secret>\n
For example:
mainflux-cli messages send a31e16f8-343c-4366-8b4f-c95e190937f4 '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]' fc9473d8-6756-4fcc-968f-ea43cd0b803b\n
In the Mainflux system terminal you should see the following logs:
...\nmainflux-things | {\"level\":\"info\",\"message\":\"Method authorize_by_key for channel with id a31e16f8-343c-4366-8b4f-c95e190937f4 by client with secret fc9473d8-6756-4fcc-968f-ea43cd0b803b took 7.048706ms to complete without errors.\",\"ts\":\"2023-04-04T08:06:09.750992633Z\"}\nmainflux-broker | [1] 2023/04/04 08:06:09.753072 [TRC] 192.168.144.11:60616 - cid:10 - \"v1.18.0:go\" - <<- [PUB channels.a31e16f8-343c-4366-8b4f-c95e190937f4 261]\nmainflux-broker | [1] 2023/04/04 08:06:09.754037 [TRC] 192.168.144.11:60616 - cid:10 - \"v1.18.0:go\" - <<- MSG_PAYLOAD: [\"\\n$a31e16f8-343c-4366-8b4f-c95e190937f4\\x1a$5d5e593b-7629-4cc3-bebc-b20d8ab9dbef\\\"\\x04http*\\xa6\\x01[{\\\"bn\\\":\\\"some-base-name:\\\",\\\"bt\\\":1.276020076001e+09, \\\"bu\\\":\\\"A\\\",\\\"bver\\\":5, \\\"n\\\":\\\"voltage\\\",\\\"u\\\":\\\"V\\\",\\\"v\\\":120.1}, {\\\"n\\\":\\\"current\\\",\\\"t\\\":-5,\\\"v\\\":1.2}, {\\\"n\\\":\\\"current\\\",\\\"t\\\":-4,\\\"v\\\":1.3}]0\\xd9\\xe6\\x8b\\xc9\u00d8\\xab\\xa9\\x17\"]\nmainflux-broker | [1] 2023/04/04 08:06:09.755550 [TRC] 192.168.144.13:58572 - cid:8 - \"v1.18.0:go\" - ->> [MSG channels.a31e16f8-343c-4366-8b4f-c95e190937f4 1 261]\nmainflux-http | {\"level\":\"info\",\"message\":\"Method publish to channel a31e16f8-343c-4366-8b4f-c95e190937f4 took 15.979094ms to complete without errors.\",\"ts\":\"2023-04-04T08:06:09.75232571Z\"}\n...\n
This proves that messages have been correctly sent through the system via the protocol adapter (mainflux-http
).
Mainflux can be easily deployed on the Kubernetes platform by using the Helm Chart from the official Mainflux DevOps GitHub repository.
"},{"location":"kubernetes/#prerequisites","title":"Prerequisites","text":"Kubernetes is an open source container orchestration engine for automating deployment, scaling, and management of containerised applications. Install it locally or have access to a cluster. Follow these instructions if you need more information.
"},{"location":"kubernetes/#kubectl","title":"Kubectl","text":"Kubectl is official Kubernetes command line client. Follow these instructions to install it.
Regarding the cluster control with kubectl
, default config .yaml
file should be ~/.kube/config
.
Helm is the package manager for Kubernetes. Follow these instructions to install it.
"},{"location":"kubernetes/#stable-helm-repository","title":"Stable Helm Repository","text":"Add a stable chart repository:
helm repo add stable https://charts.helm.sh/stable\n
Add a bitnami chart repository:
helm repo add bitnami https://charts.bitnami.com/bitnami\n
"},{"location":"kubernetes/#nginx-ingress-controller","title":"Nginx Ingress Controller","text":"Follow these instructions to install it or:
helm install ingress-nginx ingress-nginx/ingress-nginx --version 3.26.0 --create-namespace -n ingress-nginx\n
"},{"location":"kubernetes/#deploying-mainflux","title":"Deploying Mainflux","text":"Get Helm charts from Mainflux DevOps GitHub repository:
git clone https://github.com/mainflux/devops.git\ncd devops/charts/mainflux\n
Update the on-disk dependencies to mirror Chart.yaml:
helm dependency update\n
If you didn't already have namespace created you should do it with:
kubectl create namespace mf\n
Deploying release named mainflux
in namespace named mf
is done with just:
helm install mainflux . -n mf\n
Mainflux is now deployed on your Kubernetes.
"},{"location":"kubernetes/#customizing-installation","title":"Customizing Installation","text":"You can override default values while installing with --set
option. For example, if you want to specify ingress hostname and pull latest
tag of users
image:
helm install mainflux . -n mf --set ingress.hostname='example.com' --set users.image.tag='latest'\n
Or if the release is already installed, you can update it:
helm upgrade mainflux . -n mf --set ingress.hostname='example.com' --set users.image.tag='latest'\n
The following table lists the configurable parameters and their default values.
Parameter Description Default defaults.logLevel Log level debug defaults.image.pullPolicy Docker Image Pull Policy IfNotPresent defaults.image.repository Docker Image Repository mainflux defaults.image.tag Docker Image Tag 0.13.0 defaults.replicaCount Replicas of MQTT adapter, Things, Envoy and Authn 3 defaults.messageBrokerUrl Message broker URL, the default is NATS Url nats://nats:4222 defaults.jaegerPort Jaeger port 6831 nginxInternal.mtls.tls TLS secret which contains the server cert/key nginxInternal.mtls.intermediateCrt Generic secret which contains the intermediate cert used to verify clients ingress.enabled Should the Nginx Ingress be created true ingress.hostname Hostname for the Nginx Ingress ingress.tls.hostname Hostname of the Nginx Ingress certificate ingress.tls.secret TLS secret for the Nginx Ingress messageBroker.maxPayload Maximum payload size in bytes that the Message Broker server, if it is NATS, server will accept 268435456 messageBroker.replicaCount Message Broker replicas 3 users.dbPort Users service DB port 5432 users.httpPort Users service HTTP port 9000 things.dbPort Things service DB port 5432 things.httpPort Things service HTTP port 9001 things.authGrpcPort Things service Auth gRPC port 7000 things.authHttpPort Things service Auth HTTP port 9002 things.redisESPort Things service Redis Event Store port 6379 things.redisCachePort Things service Redis Auth Cache port 6379 adapter_http.httpPort HTTP adapter port 8185 mqtt.proxy.mqttPort MQTT adapter proxy port 1884 mqtt.proxy.wsPort MQTT adapter proxy WS port 8081 mqtt.broker.mqttPort MQTT adapter broker port 1883 mqtt.broker.wsPort MQTT adapter broker WS port 8080 mqtt.broker.persistentVolume.size MQTT adapter broker data Persistent Volume size 5Gi mqtt.redisESPort MQTT adapter Event Store port 6379 mqtt.redisCachePort MQTT adapter Redis Auth Cache port 6379 adapter_coap.udpPort CoAP adapter UDP port 5683 ui.port UI port 3000 bootstrap.enabled Enable bootstrap service false bootstrap.dbPort Bootstrap service DB port 5432 bootstrap.httpPort Bootstrap service HTTP port 9013 bootstrap.redisESPort Bootstrap service Redis Event Store port 6379 influxdb.enabled Enable InfluxDB reader & writer false influxdb.dbPort InfluxDB port 8086 influxdb.writer.httpPort InfluxDB writer HTTP port 9006 influxdb.reader.httpPort InfluxDB reader HTTP port 9005 adapter_opcua.enabled Enable OPC-UA adapter false adapter_opcua.httpPort OPC-UA adapter HTTP port 8188 adapter_opcua.redisRouteMapPort OPC-UA adapter Redis Auth Cache port 6379 adapter_lora.enabled Enable LoRa adapter false adapter_lora.httpPort LoRa adapter HTTP port 8187 adapter_lora.redisRouteMapPort LoRa adapter Redis Auth Cache port 6379 twins.enabled Enable twins service false twins.dbPort Twins service DB port 27017 twins.httpPort Twins service HTTP port 9021 twins.redisCachePort Twins service Redis Cache port 6379All Mainflux services (both core and add-ons) can have their logLevel
, image.pullPolicy
, image.repository
and image.tag
overridden.
Mainflux Core is a minimalistic set of required Mainflux services. They are all installed by default:
Mainflux Add-ons are optional services that are disabled by default. Find in Configuration table parameters for enabling them, i.e. to enable influxdb reader & writer you should run helm install
with --set influxdb=true
. List of add-ons services in charts:
By default the scale of the MQTT adapter, Things, Envoy, Authn and the Message Broker is set to 3. It's recommended that you set these values to the number of nodes in your Kubernetes cluster, i.e. --set defaults.replicaCount=3 --set messageBroker.replicaCount=3
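For example, assuming you are in the chart directory and your cluster has three nodes, that could look like this (a sketch):
helm upgrade mainflux . -n mf --set defaults.replicaCount=3 --set messageBroker.replicaCount=3\n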
To send MQTT messages to your host on ports 1883
and 8883
some additional steps are required in configuring NGINX Ingress Controller.
NGINX Ingress Controller uses a ConfigMap to expose TCP and UDP services. Those ConfigMaps are included in the helm chart in the ingress.yaml file, assuming that the location of the ConfigMaps is ingress-nginx/tcp-services
and ingress-nginx/udp-services
. These locations were set with the --tcp-services-configmap
and --udp-services-configmap
flags and you can check this in the deployment of the Ingress Controller, or add it there in the args section for nginx-ingress-controller if it's not already specified. This is explained in the NGINX Ingress documentation
Also, these three ports need to be exposed in the Service defined for the Ingress. You can do that with a command that edits your service:
kubectl edit svc -n ingress-nginx nginx-ingress-ingress-nginx-controller
and add in spec->ports:
- name: mqtt\n port: 1883\n protocol: TCP\n targetPort: 1883\n- name: mqtts\n port: 8883\n protocol: TCP\n targetPort: 8883\n- name: coap\n port: 5683\n protocol: UDP\n targetPort: 5683\n
"},{"location":"kubernetes/#tls-mtls","title":"TLS & mTLS","text":"For testing purposes you can generate certificates as explained in detail in authentication chapter of this document. So, you can use this script and after replacing all localhost
with your hostname, run:
make ca\nmake server_cert\nmake thing_cert KEY=<thing_secret>\n
you should get in certs
folder these certificates that we will use for setting up TLS and mTLS:
ca.crt\nca.key\nca.srl\nmainflux-server.crt\nmainflux-server.key\nthing.crt\nthing.key\n
Create Kubernetes secrets using those certificates by running the commands from the secrets script. In this example secrets are created in the mf
namespace:
kubectl -n mf create secret tls mainflux-server --key mainflux-server.key --cert mainflux-server.crt\n\nkubectl -n mf create secret generic ca --from-file=ca.crt\n
You can check if they were successfully created:
kubectl get secrets -n mf\n
Now set ingress.hostname and ingress.tls.hostname to your hostname, and ingress.tls.secret to mainflux-server
and after a helm upgrade you have a secured ingress with a TLS certificate.
For mTLS you need to set nginx_internal.mtls.tls=\"mainflux-server\"
and nginx_internal.mtls.intermediate_crt=\"ca\"
.
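Put together, applying the TLS and mTLS values to an existing release could look like this (a sketch using the parameter names from this section; example.com is a placeholder hostname):
helm upgrade mainflux . -n mf --set ingress.hostname='example.com' --set ingress.tls.hostname='example.com' --set ingress.tls.secret='mainflux-server' --set nginx_internal.mtls.tls='mainflux-server' --set nginx_internal.mtls.intermediate_crt='ca'\n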
Now you can test sending an MQTT message with these parameters:
mosquitto_pub -d -L mqtts://<thing_id>:<thing_secret>@example.com:8883/channels/<channel_id>/messages --cert thing.crt --key thing.key --cafile ca.crt -m \"test-message\"\n
"},{"location":"lora/","title":"LoRa","text":"Bridging with LoRaWAN Networks can be done over the lora-adapter. This service sits between Mainflux and LoRa Server and just forwards the messages from one system to another via MQTT protocol, using the adequate MQTT topics and in the good message format (JSON and SenML), i.e. respecting the APIs of both systems.
LoRa Server is used for the connectivity layer, specifically the LoRa Gateway Bridge service, which abstracts the SemTech packet-forwarder UDP protocol into JSON over MQTT, but also the LoRa Server service, responsible for the de-duplication and handling of uplink frames received by the gateway(s), handling of the LoRaWAN MAC layer and scheduling of downlink data transmissions. Finally, the LoRa App Server service is used to interact with the system.
"},{"location":"lora/#run-lora-server","title":"Run Lora Server","text":"Before to run the lora-adapter
you must install and run LoRa Server. First, execute the following command:
go get github.com/brocaar/loraserver-docker\n
Once everything is installed, execute the following command from the LoRa Server project root:
docker-compose up\n
Troubleshooting: Mainflux and LoRa Server use their own MQTT brokers, which by default occupy MQTT port 1883
. If both are run on the same machine, different ports must be used. You can fix this on the Mainflux side by configuring the environment variable MF_MQTT_ADAPTER_MQTT_PORT
.
Now that both systems are running you must provision LoRa Server, which offers for integration with external services, a RESTful and gRPC API. You can do it as well over the LoRa App Server, which is good example of integration.
network session key
and application session key
of your Device. You can generate and copy them into your device configuration, or you can use your own pre-generated keys and set them using the LoRa App Server UI. Devices connect through OTAA. Make sure that the loraserver device-profile is using the same release as the device. If the MAC version is 1.0.X, application key = app_key
and app_eui = deviceEUI
. If the MAC version is 1.1 or ABP is used, both parameters will be needed: APP_key and Network key.
docker-compose -f docker/addons/lora-adapter/docker-compose.yml up -d\n
Troubleshouting: The lora-adapter subscribes to the LoRa Server MQTT broker and will fail if the connection is not established. You must ensure that the environment variable MF_LORA_ADAPTER_MESSAGES_URL
is propertly configured.
Remark: By defaut, MF_LORA_ADAPTER_MESSAGES_URL
is set as tcp://lora.mqtt.mainflux.io:1883
in the docker-compose.yml file of the adapter. If you run the composition without configure this variable you will start to receive messages from our demo server.
The lora-adapter use Redis database to create a route map between both systems. As in Mainflux we use Channels to connect Things, LoRa Server uses Applications to connect Devices.
The lora-adapter uses the matadata of provision events emitted by Mainflux system to update his route map. For that, you must provision Mainflux Channels and Things with an extra metadata key in the JSON Body of the HTTP request. It must be a JSON object with key lora
which value is another JSON object. This nested JSON object should contain app_id
or dev_eui
field. In this case app_id
or dev_eui
must be an existent Lora application ID or device EUI:
Channel structure:
{\n \"name\": \"<channel name>\",\n \"metadata:\": {\n \"lora\": {\n \"app_id\": \"<application ID>\"\n }\n }\n}\n
Thing structure:
{\n \"type\": \"device\",\n \"name\": \"<thing name>\",\n \"metadata:\": {\n \"lora\": {\n \"dev_eui\": \"<device EUI>\"\n }\n }\n}\n
"},{"location":"lora/#messaging","title":"Messaging","text":"To forward LoRa messages the lora-adapter subscribes to topics applications/+/devices/+
of the LoRa Server MQTT broker. It verifies the app_id
and the dev_eui
of received messages. If the mapping exists it uses corresponding Channel ID
and Thing ID
to sign and forwards the content of the LoRa message to the Mainflux message broker.
Once a channel is provisioned and thing is connected to it, it can start to publish messages on the channel. The following sections will provide an example of message publishing for each of the supported protocols.
"},{"location":"messaging/#http","title":"HTTP","text":"To publish message over channel, thing should send following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/senml+json\" -H \"Authorization: Thing <thing_secret>\" https://localhost/http/channels/<channel_id>/messages -d '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]'\n
Note that if you're going to use senml message format, you should always send messages as an array.
For more information about the HTTP messaging service API, please check out the API documentation.
"},{"location":"messaging/#mqtt","title":"MQTT","text":"To send and receive messages over MQTT you could use Mosquitto tools, or Paho if you want to use MQTT over WebSocket.
To publish message over channel, thing should call following command:
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost -m '[{\"bn\":\"some-base-name:\",\"bt\":1.276020076001e+09, \"bu\":\"A\",\"bver\":5, \"n\":\"voltage\",\"u\":\"V\",\"v\":120.1}, {\"n\":\"current\",\"t\":-5,\"v\":1.2}, {\"n\":\"current\",\"t\":-4,\"v\":1.3}]'\n
To subscribe to channel, thing should call following command:
mosquitto_sub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages -h localhost\n
If you want to use standard topic such as channels/<channel_id>/messages
with SenML content type (JSON or CBOR), you should use following topic channels/<channel_id>/messages
.
If you are using TLS to secure MQTT connection, add --cafile docker/ssl/certs/ca.crt
to every command.
CoAP adapter implements CoAP protocol using underlying UDP and according to RFC 7252. To send and receive messages over CoAP, you can use CoAP CLI. To set the add-on, please follow the installation instructions provided here.
Examples:
coap-cli get channels/<channel_id>/messages/subtopic -auth <thing_secret> -o\n
coap-cli post channels/<channel_id>/messages/subtopic -auth <thing_secret> -d \"hello world\"\n
coap-cli post channels/<channel_id>/messages/subtopic -auth <thing_secret> -d \"hello world\" -h 0.0.0.0 -p 1234\n
To send a message, use POST
request. To subscribe, send GET
request with Observe option (flag o
) set to false. There are two ways to unsubscribe:
GET
request with Observe option set to true.RST
message as a response to CONF
message received by the server.The most of the notifications received from the Adapter are non-confirmable. By RFC 7641:
Server must send a notification in a confirmable message instead of a non-confirmable message at least every 24 hours. This prevents a client that went away or is no longer interested from remaining in the list of observers indefinitely.
CoAP Adapter sends these notifications every 12 hours. To configure this period, please check adapter documentation If the client is no longer interested in receiving notifications, the second scenario described above can be used to unsubscribe.
"},{"location":"messaging/#websocket","title":"WebSocket","text":"To publish and receive messages over channel using web socket, you should first send handshake request to /channels/<channel_id>/messages
path. Don't forget to send Authorization
header with thing authorization token. In order to pass message content type to WS adapter you can use Content-Type
header.
If you are not able to send custom headers in your handshake request, send them as query parameter authorization
and content-type
. Then your path should look like this /channels/<channel_id>/messages?authorization=<thing_secret>&content-type=<content-type>
.
If you are using the docker environment prepend the url with ws
. So for example /ws/channels/<channel_id>/messages?authorization=<thing_secret>&content-type=<content-type>
.
const WebSocket = require(\"ws\");\n// do not verify self-signed certificates if you are using one\nprocess.env.NODE_TLS_REJECT_UNAUTHORIZED = \"0\";\n// c02ff576-ccd5-40f6-ba5f-c85377aad529 is an example of a thing_auth_key\nconst ws = new WebSocket(\n \"ws://localhost:8186/ws/channels/1/messages?authorization=c02ff576-ccd5-40f6-ba5f-c85377aad529\"\n);\nws.on(\"open\", () => {\n ws.send(\"something\");\n});\nws.on(\"message\", (data) => {\n console.log(data);\n});\nws.on(\"error\", (e) => {\n console.log(e);\n});\n
"},{"location":"messaging/#basic-golang-example","title":"Basic golang example","text":"package main\n\nimport (\n \"log\"\n \"os\"\n \"os/signal\"\n \"time\"\n\n \"github.com/gorilla/websocket\"\n)\n\nvar done chan interface{}\nvar interrupt chan os.Signal\n\nfunc receiveHandler(connection *websocket.Conn) {\n defer close(done)\n\n for {\n _, msg, err := connection.ReadMessage()\n if err != nil {\n log.Fatal(\"Error in receive: \", err)\n return\n }\n\n log.Printf(\"Received: %s\\n\", msg)\n }\n}\n\nfunc main() {\n done = make(chan interface{})\n interrupt = make(chan os.Signal)\n\n signal.Notify(interrupt, os.Interrupt)\n\n channelId := \"30315311-56ba-484d-b500-c1e08305511f\"\n thingSecret := \"c02ff576-ccd5-40f6-ba5f-c85377aad529\"\n\n socketUrl := \"ws://localhost:8186/channels/\" + channelId + \"/messages/?authorization=\" + thingKey\n\n conn, _, err := websocket.DefaultDialer.Dial(socketUrl, nil)\n if err != nil {\n log.Fatal(\"Error connecting to Websocket Server: \", err)\n } else {\n log.Println(\"Connected to the ws adapter\")\n }\n defer conn.Close()\n\n go receiveHandler(conn)\n\n for {\n select {\n\n case <-interrupt:\n log.Println(\"Interrupt occured, closing the connection...\")\n conn.Close()\n err := conn.WriteMessage(websocket.TextMessage, []byte(\"closed this ws client just now\"))\n if err != nil {\n log.Println(\"Error during closing websocket: \", err)\n return\n }\n\n select {\n case <-done:\n log.Println(\"Receiver Channel Closed! Exiting...\")\n\n case <-time.After(time.Duration(1) * time.Second):\n log.Println(\"Timeout in closing receiving channel. Exiting...\")\n }\n return\n }\n }\n}\n
"},{"location":"messaging/#mqtt-over-ws","title":"MQTT-over-WS","text":"Mainflux also supports MQTT-over-WS, along with pure WS protocol. this bring numerous benefits for IoT applications that are derived from the properties of MQTT - like QoS and PUB/SUB features.
There are 2 reccomended Javascript libraries for implementing browser support for Mainflux MQTT-over-WS connectivity:
As WS is an extension of the HTTP protocol, Mainflux exposes it on port 8008
, so its usage is practically transparent. Additionally, please note that since the same port as for HTTP is used (8008
), an extension URL /mqtt
should be used - i.e. the connection URL should be ws://<host_addr>/mqtt
.
For quick testing you can use HiveMQ UI tool.
Here is an example of a browser application connecting to Mainflux server and sending and receiving messages over WebSocket using MQTT.js library:
<script src=\"https://unpkg.com/mqtt/dist/mqtt.min.js\"></script>\n<script>\n // Initialize a mqtt variable globally\n console.log(mqtt)\n\n // connection option\n const options = {\n clean: true, // retain session\n connectTimeout: 4000, // Timeout period\n // Authentication information\n clientId: '14d6c682-fb5a-4d28-b670-ee565ab5866c',\n username: '14d6c682-fb5a-4d28-b670-ee565ab5866c',\n password: 'ec82f341-d4b5-4c77-ae05-34877a62428f',\n }\n\n var channelId = '08676a76-101d-439c-b62e-d4bb3b014337'\n var topic = 'channels/' + channelId + '/messages'\n\n // Connect string, and specify the connection method by the protocol\n // ws Unencrypted WebSocket connection\n // wss Encrypted WebSocket connection\n const connectUrl = 'ws://localhost/mqtt'\n const client = mqtt.connect(connectUrl, options)\n\n client.on('reconnect', (error) => {\n console.log('reconnecting:', error)\n })\n\n client.on('error', (error) => {\n console.log('Connection failed:', error)\n })\n\n client.on('connect', function () {\n console.log('client connected:' + options.clientId)\n client.subscribe(topic, { qos: 0 })\n client.publish(topic, 'WS connection demo!', { qos: 0, retain: false })\n })\n\n client.on('message', function (topic, message, packet) {\n console.log('Received Message:= ' + message.toString() + '\\nOn topic:= ' + topic)\n })\n\n client.on('close', function () {\n console.log(options.clientId + ' disconnected')\n })\n</script>\n
N.B. Eclipse Paho lib adds sub-URL /mqtt
automatically, so the procedure for connecting to the server can be something like this:
var loc = { hostname: \"localhost\", port: 8008 };\n// Create a client instance\nclient = new Paho.MQTT.Client(loc.hostname, Number(loc.port), \"clientId\");\n// Connect the client\nclient.connect({ onSuccess: onConnect });\n
"},{"location":"messaging/#subtopics","title":"Subtopics","text":"In order to use subtopics and give more meaning to your pub/sub channel, you can simply add any suffix to base /channels/<channel_id>/messages
topic.
Example subtopic publish/subscribe for bedroom temperature would be channels/<channel_id>/messages/bedroom/temperature
.
Subtopics are generic and multilevel. You can use almost any suffix with any depth.
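For instance, publishing a reading to that bedroom temperature subtopic over MQTT could look like this (a sketch reusing the MQTT publish command shown earlier):
mosquitto_pub -u <thing_id> -P <thing_secret> -t channels/<channel_id>/messages/bedroom/temperature -h localhost -m '[{\"n\":\"temperature\",\"v\":23.5}]'\n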
Topics with subtopics are propagated to Message broker in the following format channels.<channel_id>.<optional_subtopic>
.
Our example topic channels/<channel_id>/messages/bedroom/temperature
will be translated to appropriate Message Broker topic channels.<channel_id>.bedroom.temperature
.
You can use multilevel subtopics, that have multiple parts. These parts are separated by .
or /
separators. When you use a combination of these two, keep in mind that behind the scenes, the /
separator will be replaced with .
. Every empty part of subtopic will be removed. What this means is that subtopic a///b
is equivalent to a/b
. When you want to subscribe, you can use the default Message Broker, NATS, wildcards *
and >
. Every subtopic part can have *
or >
as its value, but if there is any other character besides these wildcards, the subtopic will be invalid. What this means is that subtopics such as a.b*c.d
will be invalid, while a.b.*.c.d
will be valid.
Authorization is done on channel level, so you only have to have access to channel in order to have access to it's subtopics.
Note: When using MQTT, it's recommended that you use standard MQTT wildcards +
and #
.
Mainflux supports the MQTT protocol for message exchange. MQTT is a lightweight Publish/Subscribe messaging protocol used to connect restricted devices in low bandwidth, high-latency or unreliable networks. The publish-subscribe messaging pattern requires a message broker. The broker is responsible for distributing messages to and from clients connected to the MQTT adapter.
Mainflux supports MQTT version 3.1.1. The MQTT adapter is based on Eclipse Paho MQTT client library. The adapter is configured to use nats as the default MQTT broker, but you can use vernemq too.
"},{"location":"messaging/#configuration","title":"Configuration","text":"In the dev environment, docker profiles are preferred when handling different MQTT and message brokers supported by Mainflux.
Mainflux uses two types of brokers:
MQTT_BROKER
: Handles MQTT communication between MQTT adapters and message broker.MESSAGE_BROKER
: Manages communication between adapters and Mainflux writer services.MQTT_BROKER
can be either vernemq
or nats
. MESSAGE_BROKER
can be either nats
or rabbitmq
.
Each broker has a unique profile for configuration. The available profiles are:
vernemq_nats
: Uses vernemq
as MQTT_BROKER and nats
as MESSAGE_BROKER.vernemq_rabbitmq
: Uses vernemq
as MQTT_BROKER and rabbitmq
as MESSAGE_BROKER.nats_nats
: Uses nats
as both MQTT_BROKER and MESSAGE_BROKER.nats_rabbitmq
: Uses nats
as MQTT_BROKER and rabbitmq
as MESSAGE_BROKER.The following command will run VerneMQ as an MQTT broker and Nats as a message broker:
MF_MQTT_BROKER_TYPE=vernemq MF_BROKER_TYPE=nats make run\n
The following command will run NATS as an MQTT broker and RabbitMQ as a message broker:
MF_MQTT_BROKER_TYPE=nats MF_BROKER_TYPE=rabbitmq make run\n
By default, NATS is used as an MQTT broker and RabbitMQ as a message broker.
"},{"location":"messaging/#nats-mqtt-broker","title":"Nats MQTT Broker","text":"NATS support for MQTT and it is designed to empower users to leverage their existing IoT deployments. NATS offers significant advantages in terms of security and observability when used end-to-end. NATS server as a drop-in replacement for MQTT is compelling. This approach allows you to retain your existing IoT investments while benefiting from NATS' secure, resilient, and scalable access to your streams and services.
"},{"location":"messaging/#architecture","title":"Architecture","text":"To enable MQTT support on NATS, JetStream needs to be enabled. This is done by default in Mainflux. This is because persistence is necessary for sessions and retained messages, even for QoS 0 retained messages. Communication between MQTT and NATS involves creating similar NATS subscriptions when MQTT clients subscribe to topics. This ensures that the interest is registered in the NATS cluster, and messages are delivered accordingly. When MQTT publishers send messages, they are converted to NATS subjects, and matching NATS subscriptions receive the MQTT messages.
NATS supports up to QoS 1 subscriptions, where the server retains messages until it receives the PUBACK for the corresponding packet identifier. If PUBACK is not received within the \"ack_wait\" interval, the message is resent. The maximum value for \"max_ack_pending\" is 65535.
NATS Server persists all sessions, even if they are created with the \"clean session\" flag. Sessions are identified by client identifiers. If two connections attempt to use the same client identifier, the server will close the existing connection and accept the new one, reducing the flapping rate.
NATS supports MQTT in a NATS cluster, with the replication factor automatically set based on cluster size.
"},{"location":"messaging/#limitations","title":"Limitations","text":"VerneMQ is a powerful MQTT publish/subscribe message broker designed to implement the OASIS industry standard MQTT protocol. It is built to take messaging and IoT applications to the next level by providing a unique set of features related to scalability, reliability, high-performance, and operational simplicity.
Key features of VerneMQ include:
VerneMQ is designed from the ground up to work as a distributed message broker, ensuring continued operation even in the event of node or network failures. It can easily scale both horizontally and vertically to handle large numbers of concurrent clients.
VerneMQ uses a master-less clustering technology, which means there are no special nodes like masters or slaves to consider when adding or removing nodes, making cluster operation safe and simple. This allows MQTT clients to connect to any cluster node and receive messages from any other node. However, it acknowledges the challenges of fulfilling MQTT specification guarantees in a distributed environment, particularly during network partitions.
"},{"location":"messaging/#message-broker","title":"Message Broker","text":"Mainflux supports multiple message brokers for message exchange. Message brokers are used to distribute messages to and from clients connected to the different protocols adapters and writers. Writers, which are responsible for storing messages in the database, are connected to the message broker using wildcard subscriptions. This means that writers will receive all messages published to the message broker. Clients can subscribe to the message broker using topic and subtopic combinations. The message broker will then forward all messages published to the topic and subtopic combination to the client.
Mainflux supports NATS, RabbitMQ and Kafka as message brokers.
"},{"location":"messaging/#nats-jetstream","title":"NATS JetStream","text":"Since Mainflux supports configurable message brokers, you can use Nats with JetStream enabled as a message broker. To do so, you need to set MF_BROKER_TYPE
to nats
and set MF_NATS_URL
to the url of your nats instance. When using make
command to start Mainflux MF_BROKER_URL
is automatically set to MF_NATS_URL
.
Since Mainflux is using nats:2.9.21-alpine
docker image with the following configuration:
max_payload: 1MB\nmax_connections: 1M\nport: $MF_NATS_PORT\nhttp_port: $MF_NATS_HTTP_PORT\ntrace: true\n\njetstream {\n store_dir: \"/data\"\n cipher: \"aes\"\n key: $MF_NATS_JETSTREAM_KEY\n max_mem: 1G\n}\n
These are the default values but you can change them by editing the configuration file. For more information about nats configuration checkout official nats documentation. The health check endpoint is exposed on MF_NATS_HTTP_PORT
and its /healthz
path.
The main reason for using Nats with JetStream enabled is to have a distributed system with high availability and minimal dependencies. Nats is configure to run as the default message broker, but you can use any other message broker supported by Mainflux. Nats is configured to use JetStream, which is a distributed streaming platform built on top of nats. JetStream is used to store messages and to provide high availability. This makes nats to be used as the default event store, but you can use any other event store supported by Mainflux. Nats with JetStream enabled is also used as a key-value store for caching purposes. This makes nats to be used as the default cache store, but you can use any other cache store supported by Mainflux.
This versatile architecture allows you to use nats alone for the MQTT broker, message broker, event store and cache store. This is the default configuration, but you can use any other MQTT broker, message broker, event store and cache store supported by Mainflux.
"},{"location":"messaging/#rabbitmq","title":"RabbitMQ","text":"Since Mainflux uses a configurable message broker, you can use RabbitMQ as a message broker. To do so, you need to set MF_BROKER_TYPE
to rabbitmq
and set MF_RABBITMQ_URL
to the url of your RabbitMQ instance. When using make
command to start Mainflux MF_BROKER_URL
is automatically set to MF_RABBITMQ_URL
.
Since Mainflux is using rabbitmq:3.9.20-management-alpine
docker image, the management console is available at port MF_RABBITMQ_HTTP_PORT
Mainflux has one exchange for the entire platform called messages
. This exchange is of type topic
. The exchange is durable
i.e. it will survive broker restarts and remain declared when there are no remaining bindings. The exchange does not auto-delete
when all queues have finished using it. When declaring the exchange no_wait
is set to false
which means that the broker will wait for a confirmation from the server that the exchange was successfully declared. The exchange is not internal
i.e. other exchanges can publish messages to it.
Mainflux uses topic-based routing to route messages to the appropriate queues. The routing key is in the format channels.<channel_id>.<optional_subtopic>
. A few valid routing key examples: channels.318BC587-A68B-40D3-9026-3356FA4E702C
, channels.318BC587-A68B-40D3-9026-3356FA4E702C.bedroom.temperature
.
The AMQP published message doesn't contain any headers. The message body is the payload of the message.
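As an illustration of this routing, you could publish a test message straight to the exchange with rabbitmqadmin, which ships with the RabbitMQ management plugin (a sketch; the channel ID is a placeholder):
rabbitmqadmin publish exchange=messages routing_key=channels.<channel_id> payload=\"test message\"\n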
When subscribing to messages from a channel, a queue is created with the name channels.<channel_id>.<optional_subtopic>
. The queue is durable
i.e. it will survive broker restarts and remain declared when there are no remaining consumers or bindings. The queue does not auto-delete
when all consumers have finished using it. The queue is not exclusive
i.e. it can be accessed in other connections. When declaring the queue we set no_wait
to false
which means that the broker waits for a confirmation from the server that the queue was successfully declared. The queue is not passive i.e. the server creates the queue if it does not exist.
The queue is then bound to the exchange with the routing key channels.<channel_id>.<optional_subtopic>
. The binding is not no-wait i.e. the broker waits for a confirmation from the server that the binding was successfully created.
Once this is done, the consumer can start consuming messages from the queue with a specific client ID. The consumer is not no-local
i.e. the server will not send messages to the connection that published them. The consumer is not exclusive
i.e. the queue can be accessed in other connections. The consumer is no-ack
i.e. the server acknowledges deliveries to this consumer prior to writing the delivery to the network.
When Unsubscribing from a channel, the queue is unbound from the exchange and deleted.
For more information and examples checkout official nats.io documentation, official rabbitmq documentation, official vernemq documentation and official kafka documentation.
"},{"location":"opcua/","title":"OPC-UA","text":"Bridging with an OPC-UA Server can be done over the opcua-adapter. This service sits between Mainflux and an OPC-UA Server and just forwards the messages from one system to another.
"},{"location":"opcua/#run-opc-ua-server","title":"Run OPC-UA Server","text":"The OPC-UA Server is used for connectivity layer. It allows various methods to read information from the OPC-UA server and its nodes. The current version of the opcua-adapter still experimental and only Browse
and Subscribe
methods are implemented. Public OPC-UA test servers are available for testing of OPC-UA clients and can be used for development and test purposes.
Execute the following command from Mainflux project root to run the opcua-adapter:
docker-compose -f docker/addons/opcua-adapter/docker-compose.yml up -d\n
"},{"location":"opcua/#route-map","title":"Route Map","text":"The opcua-adapter use Redis database to create a route-map between Mainflux and an OPC-UA Server. As Mainflux use Things and Channels IDs to sign messages, OPC-UA use node ID (node namespace and node identifier combination) and server URI. The adapter route-map associate a Thing ID
with a Node ID
and a Channel ID
with a Server URI
.
The opcua-adapter uses the matadata of provision events emitted by Mainflux system to update its route map. For that, you must provision Mainflux Channels and Things with an extra metadata key in the JSON Body of the HTTP request. It must be a JSON object with key opcua
which value is another JSON object. This nested JSON object should contain node_id
or server_uri
that correspond to an existent OPC-UA Node ID
or Server URI
:
Channel structure:
{\n \"name\": \"<channel name>\",\n \"metadata:\": {\n \"opcua\": {\n \"server_uri\": \"<Server URI>\"\n }\n }\n}\n
Thing structure:
{\n \"name\": \"<thing name>\",\n \"metadata:\": {\n \"opcua\": {\n \"node_id\": \"<Node ID>\"\n }\n }\n}\n
"},{"location":"opcua/#browse","title":"Browse","text":"The opcua-adapter exposes a /browse
HTTP endpoint accessible with method GET
and configurable throw HTTP query parameters server
, namespace
and identifier
. The server URI, the node namespace and the node identifier represent the parent node and are used to fetch the list of available children nodes starting from the given one. By default the root node ID (node namespace and node identifier combination) of an OPC-UA server is ns=0;i=84
. It's also the default value used by the opcua-adapter to do the browsing if only the server URI is specified in the HTTP query.
To create an OPC-UA subscription, user should connect the Thing to the Channel. This will automatically create the connection, enable the redis route-map and run a subscription to the server_uri
and node_id
defined in the Thing and Channel metadata.
To forward OPC-UA messages the opcua-adapter subscribes to the Node ID of an OPC-UA Server URI. It verifies the server_uri
and the node_id
of received messages. If the mapping exists it uses corresponding Channel ID
and Thing ID
to sign and forwards the content of the OPC-UA message to the Mainflux message broker. If the mapping or the connection between the Thing and the Channel don't exist the subscription stops.
Provisioning is a process of configuration of an IoT platform in which system operator creates and sets-up different entities used in the platform - users, groups, channels and things.
"},{"location":"provision/#platform-management","title":"Platform management","text":""},{"location":"provision/#users-management","title":"Users Management","text":""},{"location":"provision/#account-creation","title":"Account Creation","text":"Use the Mainflux API to create user account:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" https://localhost/users -d '{\"name\": \"John Doe\", \"credentials\": {\"identity\": \"john.doe@email.com\", \"secret\": \"12345678\"}, \"status\": \"enabled\"}'\n
Response should look like this:
HTTP/2 201\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 08:40:39 GMT\ncontent-type: application/json\ncontent-length: 229\nlocation: /users/71db4bb0-591e-4f76-b766-b39ced9fc6b8\nstrict-transport-security: max-age=63072000; includeSubdomains\nx-frame-options: DENY\nx-content-type-options: nosniff\naccess-control-allow-origin: *\naccess-control-allow-methods: *\naccess-control-allow-headers: *\n\n{\n \"id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"John Doe\",\n \"credentials\": { \"identity\": \"john.doe@email.com\" },\n \"created_at\": \"2023-04-04T08:40:39.319602Z\",\n \"updated_at\": \"2023-04-04T08:40:39.319602Z\",\n \"status\": \"enabled\"\n}\n
Note that when using the official docker-compose
, all services are behind the nginx
proxy and all traffic is TLS
encrypted.
In order for this user to be able to authenticate to the system, you will have to create an authorization token for them:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" https://localhost/users/tokens/issue -d '{\"identity\":\"john.doe@email.com\", \"secret\":\"12345678\"}'\n
Response should look like this:
HTTP/2 201\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 08:40:58 GMT\ncontent-type: application/json\ncontent-length: 709\nstrict-transport-security: max-age=63072000; includeSubdomains\nx-frame-options: DENY\nx-content-type-options: nosniff\naccess-control-allow-origin: *\naccess-control-allow-methods: *\naccess-control-allow-headers: *\n\n{\n \"access_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2NTE2NTgsImlhdCI6MTY4MDU5NzY1OCwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI3MWRiNGJiMC01OTFlLTRmNzYtYjc2Ni1iMzljZWQ5ZmM2YjgiLCJ0eXBlIjoiYWNjZXNzIn0.E4v79FvikIVs-eYOJAgepBX67G2Pzd9YnC-k3xkVrRQcAjHSdMx685jttr9-uuZtF1q3yIpvV-NdQJ2CG5eDtw\",\n \"refresh_token\": \"eyJhbGciOiJIUzUxMiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2ODA2ODQwNTgsImlhdCI6MTY4MDU5NzY1OCwiaWRlbnRpdHkiOiJqb2huLmRvZUBlbWFpbC5jb20iLCJpc3MiOiJjbGllbnRzLmF1dGgiLCJzdWIiOiI3MWRiNGJiMC01OTFlLTRmNzYtYjc2Ni1iMzljZWQ5ZmM2YjgiLCJ0eXBlIjoicmVmcmVzaCJ9.K236Hz9nsm3dnvW6i7myu5xWcBaNFEMAIeekWkiS_X9y0sQ1LZwl997hkkj4IHFFrbn8KLfmkOfTOqVWgUREFg\",\n \"access_type\": \"Bearer\"\n}\n
For more information about the Users service API, please check out the API documentation.
"},{"location":"provision/#system-provisioning","title":"System Provisioning","text":"Before proceeding, make sure that you have created a new account and obtained an authorization token. You can set your access_token
in the USER_TOKEN
environment variable:
USER_TOKEN=<access_token>\n
"},{"location":"provision/#provisioning-things","title":"Provisioning Things","text":"This endpoint will be depreciated in 1.0.0. It will be replaced with the bulk endpoint currently found at /things/bulk.
Things are created by executing request POST /things
with a JSON payload. Note that you will need user_token
in order to create things that belong to this particular user.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things -d '{\"name\":\"weio\"}'\n
The response will contain a Location
header whose value represents the path to the newly created thing:
HTTP/2 201\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:06:50 GMT\ncontent-type: application/json\ncontent-length: 282\nlocation: /things/9dd12d93-21c9-4147-92fe-769386efb6cc\naccess-control-expose-headers: Location\n\n{\n \"id\": \"9dd12d93-21c9-4147-92fe-769386efb6cc\",\n \"name\": \"weio\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"551e9869-d10f-4682-8319-5a4b18073313\" },\n \"created_at\": \"2023-04-04T09:06:50.460258649Z\",\n \"updated_at\": \"2023-04-04T09:06:50.460258649Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"provision/#bulk-provisioning-things","title":"Bulk Provisioning Things","text":"Multiple things can be created by executing a POST /things/bulk
request with a JSON payload. The payload should contain a JSON array of the things to be created. If there is an error with any of the things, none of the things will be created.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things/bulk -d '[{\"name\":\"weio\"},{\"name\":\"bob\"}]'\n
The response's body will contain a list of the created things.
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 08:42:04 GMT\ncontent-type: application/json\ncontent-length: 586\naccess-control-expose-headers: Location\n\n{\n \"total\": 2,\n \"things\": [{\n \"id\": \"1b1cd38f-62cd-4f17-b47e-5ff4e97881e8\",\n \"name\": \"weio\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"43bd950e-0b3f-46f6-a92c-296a6a0bfe66\" },\n \"created_at\": \"2023-04-04T08:42:04.168388927Z\",\n \"updated_at\": \"2023-04-04T08:42:04.168388927Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"b594af97-9550-4b11-86e1-2b6db7e329b9\",\n \"name\": \"bob\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"9f89f52e-1b06-4416-8294-ae753b0c4bea\" },\n \"created_at\": \"2023-04-04T08:42:04.168390109Z\",\n \"updated_at\": \"2023-04-04T08:42:04.168390109Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
"},{"location":"provision/#retrieving-provisioned-things","title":"Retrieving Provisioned Things","text":"In order to retrieve data of provisioned things that are written in database, you can send following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things\n
Notice that you will receive only those things that were provisioned by the user_token
owner.
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 08:42:27 GMT\ncontent-type: application/json\ncontent-length: 570\naccess-control-expose-headers: Location\n\n{\n \"limit\": 10,\n \"total\": 2,\n \"things\": [{\n \"id\": \"1b1cd38f-62cd-4f17-b47e-5ff4e97881e8\",\n \"name\": \"weio\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"43bd950e-0b3f-46f6-a92c-296a6a0bfe66\" },\n \"created_at\": \"2023-04-04T08:42:04.168388Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"b594af97-9550-4b11-86e1-2b6db7e329b9\",\n \"name\": \"bob\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"9f89f52e-1b06-4416-8294-ae753b0c4bea\" },\n \"created_at\": \"2023-04-04T08:42:04.16839Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
You can specify offset
and limit
parameters in order to fetch a specific subset of things. In that case, your request should look like:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things?offset=0&limit=5\n
You can specify name
and/or metadata
parameters in order to fetch a specific subset of things. When specifying metadata, you can provide just the part of the metadata JSON you want to match.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things?offset=0&limit=5&name=\"weio\"\n
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 08:43:09 GMT\ncontent-type: application/json\ncontent-length: 302\naccess-control-expose-headers: Location\n\n{\n \"limit\": 5,\n \"total\": 1,\n \"things\": [{\n \"id\": \"1b1cd38f-62cd-4f17-b47e-5ff4e97881e8\",\n \"name\": \"weio\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"43bd950e-0b3f-46f6-a92c-296a6a0bfe66\" },\n \"created_at\": \"2023-04-04T08:42:04.168388Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }]\n}\n
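For instance, to match things by a metadata key-value pair, the metadata parameter can be passed as URL-encoded JSON (the key and value below are placeholders):
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" \"https://localhost/things?offset=0&limit=5&metadata=%7B%22location%22%3A%22bedroom%22%7D\"\n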
If you don't provide them, default values will be used instead: 0 for offset
and 10 for limit
. Note that limit
cannot be set to values greater than 100. Providing invalid values will be treated as a malformed request.
This is a special endpoint that allows you to disable a thing, soft-deleting it from the database. In order to disable your own thing, you can send the following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things/1b1cd38f-62cd-4f17-b47e-5ff4e97881e8/disable\n
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:00:40 GMT\ncontent-type: application/json\ncontent-length: 277\naccess-control-expose-headers: Location\n\n{\n \"id\": \"1b1cd38f-62cd-4f17-b47e-5ff4e97881e8\",\n \"name\": \"weio\",\n \"owner\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"credentials\": { \"secret\": \"43bd950e-0b3f-46f6-a92c-296a6a0bfe66\" },\n \"created_at\": \"2023-04-04T08:42:04.168388Z\",\n \"updated_at\": \"2023-04-04T08:42:04.168388Z\",\n \"status\": \"disabled\"\n}\n
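Assuming the service exposes a matching enable endpoint (a sketch, not covered above), a disabled thing could be re-enabled with a request of the same shape:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things/1b1cd38f-62cd-4f17-b47e-5ff4e97881e8/enable\n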
"},{"location":"provision/#provisioning-channels","title":"Provisioning Channels","text":"This endpoint will be depreciated in 1.0.0. It will be replaced with the bulk endpoint currently found at /channels/bulk.
Channels are created by executing request POST /channels
:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels -d '{\"name\":\"mychan\"}'\n
After sending the request, you should receive a response with a Location
header that contains the path to the newly created channel:
HTTP/2 201\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:18:10 GMT\ncontent-type: application/json\ncontent-length: 235\nlocation: /channels/0a67a8ee-eda9-408e-af83-f895096b7359\naccess-control-expose-headers: Location\n\n{\n \"id\": \"0a67a8ee-eda9-408e-af83-f895096b7359\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"mychan\",\n \"created_at\": \"2023-04-04T09:18:10.26603Z\",\n \"updated_at\": \"2023-04-04T09:18:10.26603Z\",\n \"status\": \"enabled\"\n}\n
"},{"location":"provision/#bulk-provisioning-channels","title":"Bulk Provisioning Channels","text":"Multiple channels can be created by executing a POST /things/bulk
request with a JSON payload. The payload should contain a JSON array of the channels to be created. If there is an error any of the channels, none of the channels will be created.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels/bulk -d '[{\"name\":\"joe\"},{\"name\":\"betty\"}]'\n
The response's body will contain a list of the created channels.
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:11:16 GMT\ncontent-type: application/json\ncontent-length: 487\naccess-control-expose-headers: Location\n\n{\n \"channels\": [{\n \"id\": \"5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"joe\",\n \"created_at\": \"2023-04-04T09:11:16.131972Z\",\n \"updated_at\": \"2023-04-04T09:11:16.131972Z\",\n \"status\": \"disabled\"\n },\n {\n \"id\": \"ff1316f1-d3c6-4590-8bf3-33774d79eab2\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"betty\",\n \"created_at\": \"2023-04-04T09:11:16.138881Z\",\n \"updated_at\": \"2023-04-04T09:11:16.138881Z\",\n \"status\": \"disabled\"\n }\n ]\n}\n
"},{"location":"provision/#retrieving-provisioned-channels","title":"Retrieving Provisioned Channels","text":"In order to retrieve data of provisioned channels that are written in database, you can send following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels\n
Notice that you will receive only those channels that were provisioned by the user_token
owner.
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:13:48 GMT\ncontent-type: application/json\ncontent-length: 495\naccess-control-expose-headers: Location\n\n{\n \"total\": 2,\n \"channels\": [{\n \"id\": \"5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"joe\",\n \"created_at\": \"2023-04-04T09:11:16.131972Z\",\n \"updated_at\": \"2023-04-04T09:11:16.131972Z\",\n \"status\": \"enabled\"\n },\n {\n \"id\": \"ff1316f1-d3c6-4590-8bf3-33774d79eab2\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"betty\",\n \"created_at\": \"2023-04-04T09:11:16.138881Z\",\n \"updated_at\": \"2023-04-04T09:11:16.138881Z\",\n \"status\": \"enabled\"\n }\n ]\n}\n
You can specify offset
and limit
parameters in order to fetch a specific subset of channels. In that case, your request should look like:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels?offset=0&limit=5\n
If you don't provide them, default values will be used instead: 0 for offset
and 10 for limit
. Note that limit
cannot be set to values greater than 100. Providing invalid values will be treated as a malformed request.
This is a special endpoint that allows you to disable a channel, soft-deleting it from the database. In order to disable your own channel, you can send the following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels/5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820/disable\n
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:16:31 GMT\ncontent-type: application/json\ncontent-length: 235\naccess-control-expose-headers: Location\n\n{\n \"id\": \"5ec1beb9-1b76-47e6-a9ef-baf9e4ae5820\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"joe\",\n \"created_at\": \"2023-04-04T09:11:16.131972Z\",\n \"updated_at\": \"2023-04-04T09:11:16.131972Z\",\n \"status\": \"disabled\"\n}\n
"},{"location":"provision/#access-control","title":"Access Control","text":"Channel can be observed as a communication group of things. Only things that are connected to the channel can send and receive messages from other things in this channel. Things that are not connected to this channel are not allowed to communicate over it. Users may also be assigned to channels, thus sharing things between users. With the necessary policies in place, users can be granted access to things that are not owned by them.
A user who is the owner of a channel, or a user that has been assigned to the channel with the required policy, can connect things to the channel. This is equivalent to giving these things permission to communicate over the given communication group.
To connect a thing to a channel, you should send the following request:
This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint found at /connect.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X PUT -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels/<channel_id>/things/<thing_id>\n
HTTP/2 201\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:20:23 GMT\ncontent-type: application/json\ncontent-length: 266\naccess-control-expose-headers: Location\n\n{\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"subject\": \"b594af97-9550-4b11-86e1-2b6db7e329b9\",\n \"object\": \"ff1316f1-d3c6-4590-8bf3-33774d79eab2\",\n \"actions\": [\"m_write\", \"m_read\"],\n \"created_at\": \"2023-04-04T09:20:23.015342Z\",\n \"updated_at\": \"2023-04-04T09:20:23.015342Z\"\n}\n
To connect multiple things to a channel, you can send the following request:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/connect -d '{\"channel_ids\":[\"<channel_id>\", \"<channel_id>\"],\"thing_ids\":[\"<thing_id>\", \"<thing_id>\"]}'\n
You can observe which things are connected to a specific channel:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels/<channel_id>/things\n
Response that you'll get should look like this:
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:53:21 GMT\ncontent-type: application/json\ncontent-length: 254\naccess-control-expose-headers: Location\n\n{\n \"limit\": 10,\n \"total\": 1,\n \"things\": [{\n \"id\": \"b594af97-9550-4b11-86e1-2b6db7e329b9\",\n \"name\": \"bob\",\n \"credentials\": { \"secret\": \"9f89f52e-1b06-4416-8294-ae753b0c4bea\" },\n \"created_at\": \"2023-04-04T08:42:04.16839Z\",\n \"updated_at\": \"0001-01-01T00:00:00Z\",\n \"status\": \"enabled\"\n }]\n}\n
You can observe to which channels a specific thing is connected:
curl -s -S -i --cacert docker/ssl/certs/ca.crt -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/things/<thing_id>/channels\n
Response that you'll get should look like this:
HTTP/2 200\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:57:10 GMT\ncontent-type: application/json\ncontent-length: 261\naccess-control-expose-headers: Location\n\n{\n \"total\": 1,\n \"channels\": [{\n \"id\": \"ff1316f1-d3c6-4590-8bf3-33774d79eab2\",\n \"owner_id\": \"71db4bb0-591e-4f76-b766-b39ced9fc6b8\",\n \"name\": \"betty\",\n \"created_at\": \"2023-04-04T09:11:16.138881Z\",\n \"updated_at\": \"2023-04-04T09:11:16.138881Z\",\n \"status\": \"enabled\"\n }]\n}\n
If you want to disconnect your thing from the channel, send the following request:
This endpoint will be deprecated in 1.0.0. It will be replaced with the bulk endpoint found at /disconnect.
curl -s -S -i --cacert docker/ssl/certs/ca.crt -X DELETE -H \"Authorization: Bearer $USER_TOKEN\" https://localhost/channels/<channel_id>/things/<thing_id>\n
Response that you'll get should look like this:
HTTP/2 204\nserver: nginx/1.23.3\ndate: Tue, 04 Apr 2023 09:57:53 GMT\naccess-control-expose-headers: Location\n
For more information about the Things service API, please check out the API documentation.
"},{"location":"provision/#provision-service","title":"Provision Service","text":"Provisioning is a process of configuration of an IoT platform in which system operator creates and sets-up different entities used in the platform - users, channels and things. It is part of process of setting up IoT applications where we connect devices on edge with platform in cloud. For provisioning we can use Mainflux CLI for creating users and for each node in the edge (eg. gateway) required number of things, channels, connecting them and creating certificates if needed. Provision service is used to set up initial application configuration once user is created. Provision service creates things, channels, connections and certificates. Once user is created we can use provision to create a setup for edge node in one HTTP request instead of issuing several CLI commands.
Provision service provides an HTTP API to interact with Mainflux.
For gateways to communicate with Mainflux, configuration is required (MQTT host, thing, channels, certificates...). The gateway will send a request to the Bootstrap service providing <external_id>
and <external_key>
in an HTTP request to get the configuration. To make a request to the Bootstrap service, you can use the Agent service on the gateway.
To create a bootstrap configuration, you can use the Bootstrap or Provision
service. Mainflux UI uses Bootstrap service for creating gateway configurations. Provision
service provides an easy way of provisioning your gateways, i.e. creating the bootstrap configuration and as many things and channels as your setup requires.
Also, you may use the Provision service to create certificates for each thing. Each service running on the gateway may require more than one thing and channel for communication. If, for example, you are using the Agent and Export services on a gateway, you will need two channels for Agent
(data
and control
) and one thing for Export
. Additionally, if you enabled mTLS, each service will need its own thing and certificate to access Mainflux. Your setup could require any number of things and channels; this kind of setup is what we call a provision layout
.
Provision service provides a way of specifying this provision layout
and creating a setup according to that layout by serving requests on /mapping
endpoint. Provision layout is configured in config.toml.
The service is configured using the environment variables presented in the following table. Note that any unset variables will be replaced with their default values.
By default, a call to the /mapping
endpoint will create one thing and two channels (control
and data
) and connect them, as this is the typical setup required by Agent. If a different provision layout is required, a config file can be used in addition to environment variables.
For the purpose of running Provision as an add-on in a docker composition, environment variables seem more suitable. Environment variables are set in .env.
Configuration can also be specified in config.toml. The config file can specify all the settings that environment variables can configure and, in addition, the /mapping
endpoint provision layout can be configured.
In config.toml
we can list an array of things and channels that we want to create and the connections to make between them, which we call the provision layout.
Thing metadata can be whatever suits your needs. A thing that has metadata with external_id
will have a bootstrap configuration created (the external_id
value will be populated with the value from the request). The bootstrap configuration can be fetched with Agent. For channel metadata, the key type
is reserved for control
and data
, which we use with Agent.
An example of a provision layout is given below:
[bootstrap]\n [bootstrap.content]\n [bootstrap.content.agent.edgex]\n url = \"http://localhost:48090/api/v1/\"\n\n [bootstrap.content.agent.log]\n level = \"info\"\n\n [bootstrap.content.agent.mqtt]\n mtls = false\n qos = 0\n retain = false\n skip_tls_ver = true\n url = \"localhost:1883\"\n\n [bootstrap.content.agent.server]\n nats_url = \"localhost:4222\"\n port = \"9000\"\n\n [bootstrap.content.agent.heartbeat]\n interval = \"30s\"\n\n [bootstrap.content.agent.terminal]\n session_timeout = \"30s\"\n\n [bootstrap.content.export.exp]\n log_level = \"debug\"\n nats = \"nats://localhost:4222\"\n port = \"8172\"\n cache_url = \"localhost:6379\"\n cache_pass = \"\"\n cache_db = \"0\"\n\n [bootstrap.content.export.mqtt]\n ca_path = \"ca.crt\"\n cert_path = \"thing.crt\"\n channel = \"\"\n host = \"tcp://localhost:1883\"\n mtls = false\n password = \"\"\n priv_key_path = \"thing.key\"\n qos = 0\n retain = false\n skip_tls_ver = false\n username = \"\"\n\n [[bootstrap.content.export.routes]]\n mqtt_topic = \"\"\n nats_topic = \"channels\"\n subtopic = \"\"\n type = \"mfx\"\n workers = 10\n\n [[bootstrap.content.export.routes]]\n mqtt_topic = \"\"\n nats_topic = \"export\"\n subtopic = \"\"\n type = \"default\"\n workers = 10\n\n[[things]]\n name = \"thing\"\n\n [things.metadata]\n external_id = \"xxxxxx\"\n\n[[channels]]\n name = \"control-channel\"\n\n [channels.metadata]\n type = \"control\"\n\n[[channels]]\n name = \"data-channel\"\n\n [channels.metadata]\n type = \"data\"\n\n[[channels]]\n name = \"export-channel\"\n\n [channels.metadata]\n type = \"export\"\n
[bootstrap.content]
will be marshalled and saved into the content
field in the bootstrap configs when a request to /mapping
is made. The content
field from the bootstrap config is used to create the Agent
and Export
configuration files upon Agent
fetching its bootstrap configuration.
In order to create the necessary entities, the Provision service needs to authenticate against Mainflux. To provide authentication credentials to the Provision service, you can pass them in as environment variables or in a config file, either as a Mainflux user and password or as an API token (which can be issued on the /users/tokens/issue
endpoint of the users service).
Additionally, a user's token or an API token can be passed in the Authorization header; this authentication takes precedence over the others.
username
, password
- (MF_PROVISION_USER
, MF_PROVISION_PASSWORD
in .env, mf_user
, mf_pass
in config.toml) An API key (MF_PROVISION_API_KEY
in .env or config.toml) Authorization: Bearer Token|ApiKey
- a request authorization header containing the user's token. Check auth. The Provision service can be run standalone or in a docker composition as an add-on to the core docker composition.
Standalone:
MF_PROVISION_BS_SVC_URL=http://localhost:9013/things \\\nMF_PROVISION_THINGS_LOCATION=http://localhost:9000 \\\nMF_PROVISION_USERS_LOCATION=http://localhost:9002 \\\nMF_PROVISION_CONFIG_FILE=docker/addons/provision/configs/config.toml \\\nbuild/mainflux-provision\n
Docker composition:
docker-compose -f docker/addons/provision/docker-compose.yml up\n
"},{"location":"provision/#provision_1","title":"Provision","text":"For the case that credentials or API token is passed in configuration file or environment variables, call to /mapping
endpoint doesn't require Authentication
header:
curl -s -S -X POST http://localhost:9016/mapping -H 'Content-Type: application/json' -d '{\"external_id\": \"33:52:77:99:43\", \"external_key\": \"223334fw2\"}'\n
If the Provision service is not deployed with credentials or an API key, or you want to use a user other than the one set in the environment (or config file):
curl -s -S -X POST http://localhost:9016/mapping -H \"Authorization: Bearer <token|api_key>\" -H 'Content-Type: application/json' -d '{\"external_id\": \"<external_id>\", \"external_key\": \"<external_key>\"}'\n
Or, if you want to specify a name for the thing different from the one in config.toml
, you can specify the post data as:
{\n \"name\": \"<name>\",\n \"external_id\": \"<external_id>\",\n \"external_key\": \"<external_key>\"\n}\n
The response contains the created things, channels and certificates, if any:
{\n \"things\": [\n {\n \"id\": \"c22b0c0f-8c03-40da-a06b-37ed3a72c8d1\",\n \"name\": \"thing\",\n \"key\": \"007cce56-e0eb-40d6-b2b9-ed348a97d1eb\",\n \"metadata\": {\n \"external_id\": \"33:52:79:C3:43\"\n }\n }\n ],\n \"channels\": [\n {\n \"id\": \"064c680e-181b-4b58-975e-6983313a5170\",\n \"name\": \"control-channel\",\n \"metadata\": {\n \"type\": \"control\"\n }\n },\n {\n \"id\": \"579da92d-6078-4801-a18a-dd1cfa2aa44f\",\n \"name\": \"data-channel\",\n \"metadata\": {\n \"type\": \"data\"\n }\n }\n ],\n \"whitelisted\": {\n \"c22b0c0f-8c03-40da-a06b-37ed3a72c8d1\": true\n }\n}\n
"},{"location":"provision/#example","title":"Example","text":"Deploy Mainflux UI docker composition as it contains all the required services for provisioning to work ( certs
, bootstrap
and Mainflux core)
git clone https://github.com/mainflux/ui\ncd ui\ndocker-compose -f docker/docker-compose.yml up\n
Create a user and obtain an access token
mainflux-cli -m https://mainflux.com users create john.doe@email.com 12345678\n\n# Retrieve token\nmainflux-cli -m https://mainflux.com users token john.doe@email.com 12345678\n\ncreated: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTY1ODU3MDUsImlhdCI6MTU5NjU0OTcwNSwiaXNzIjoibWFpbmZsdXguYXV0aG4iLCJzdWIiOiJtaXJrYXNoQGdtYWlsLmNvbSIsInR5cGUiOjB9._vq0zJzFc9tQqc8x74kpn7dXYefUtG9IB0Cb-X2KMK8\n
Put the value of the token into an environment variable
TOKEN=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTY1ODU3MDUsImlhdCI6MTU5NjU0OTcwNSwiaXNzIjoibWFpbmZsdXguYXV0aG4iLCJzdWIiOiJtaXJrYXNoQGdtYWlsLmNvbSIsInR5cGUiOjB9._vq0zJzFc9tQqc8x74kpn7dXYefUtG9IB0Cb-X2KMK8\n
Make a call to the provision endpoint
curl -s -S -X POST http://mainflux.com:9016/mapping -H \"Authorization: Bearer $TOKEN\" -H 'Content-Type: application/json' -d '{\"name\":\"edge-gw\", \"external_id\" : \"gateway\", \"external_key\":\"external_key\" }'\n
To check the results, you can make a call to the bootstrap endpoint
curl -s -S -X GET http://mainflux.com:9013/things/bootstrap/gateway -H \"Authorization: Thing external_key\" -H 'Content-Type: application/json'\n
Or you can start Agent
with:
git clone https://github.com/mainflux/agent\ncd agent\nmake\nMF_AGENT_BOOTSTRAP_ID=gateway MF_AGENT_BOOTSTRAP_KEY=external_key MF_AGENT_BOOTSTRAP_URL=http://mainflux.com:9013/things/bootstrap build/mainflux-agent\n
Agent will retrieve the connection parameters and connect to the Mainflux cloud.
For more information about the Provision service API, please check out the API documentation.
"},{"location":"security/","title":"Security","text":""},{"location":"security/#server-configuration","title":"Server Configuration","text":""},{"location":"security/#users","title":"Users","text":"If either the cert or key is not set, the server will use insecure transport.
MF_USERS_SERVER_CERT
the path to server certificate in pem format.
MF_USERS_SERVER_KEY
the path to the server key in pem format.
If either the cert or key is not set, the server will use insecure transport.
MF_THINGS_SERVER_CERT
the path to server certificate in pem format.
MF_THINGS_SERVER_KEY
the path to the server key in pem format.
Sometimes it makes sense to run Things as a standalone service to reduce network traffic or simplify deployment. This means that the Things service operates using only a single user and is able to authorize it without gRPC communication with the Auth service. When running Things in the standalone mode, Auth
and Users
services can be omitted from the deployment. To run the service in standalone mode, set MF_THINGS_STANDALONE_EMAIL
and MF_THINGS_STANDALONE_TOKEN
.
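A minimal sketch of such a configuration (the binary name follows the build convention used elsewhere in this guide, and the values are placeholders):
MF_THINGS_STANDALONE_EMAIL=john.doe@email.com \\\nMF_THINGS_STANDALONE_TOKEN=<user_token> \\\nbuild/mainflux-things\n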
If you wish to secure the gRPC connection to Things
and Users
services you must define the CAs that you trust. This does not support mutual certificate authentication.
MF_HTTP_ADAPTER_CA_CERTS
, MF_MQTT_ADAPTER_CA_CERTS
, MF_WS_ADAPTER_CA_CERTS
, MF_COAP_ADAPTER_CA_CERTS
- the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the adapter will fail to start up.
MF_THINGS_CA_CERTS
- the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the service will fail to start up.
By default, Mainflux will connect to Postgres using insecure transport. If a secured connection is required, you can select the SSL mode and set paths to any extra certificates and keys needed.
MF_USERS_DB_SSL_MODE
the SSL connection mode for Users. MF_USERS_DB_SSL_CERT
the path to the certificate file for Users. MF_USERS_DB_SSL_KEY
the path to the key file for Users. MF_USERS_DB_SSL_ROOT_CERT
the path to the root certificate file for Users.
MF_THINGS_DB_SSL_MODE
the SSL connection mode for Things. MF_THINGS_DB_SSL_CERT
the path to the certificate file for Things. MF_THINGS_DB_SSL_KEY
the path to the key file for Things. MF_THINGS_DB_SSL_ROOT_CERT
the path to the root certificate file for Things.
Supported database connection modes are: disabled
(default), required
, verify-ca
and verify-full
.
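For example (a sketch with placeholder paths), a fully verified TLS connection for the Things database could be configured with:
MF_THINGS_DB_SSL_MODE=verify-full\nMF_THINGS_DB_SSL_CERT=/etc/ssl/certs/things-client.crt\nMF_THINGS_DB_SSL_KEY=/etc/ssl/private/things-client.key\nMF_THINGS_DB_SSL_ROOT_CERT=/etc/ssl/certs/root-ca.crt\n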
By default, gRPC communication is not secure, as the Mainflux system is most often run in a private network behind a reverse proxy.
However, TLS can be activated and configured.
"},{"location":"storage/","title":"Storage","text":"Mainflux supports various storage databases in which messages are stored:
These storages are activated via docker-compose add-ons.
The <project_root>/docker
folder contains an addons
directory. This directory is used for various services that are not core to the Mainflux platform but could be used for providing additional features.
In order to run these services, core services, as well as the network from the core composition, should be already running.
"},{"location":"storage/#writers","title":"Writers","text":"Writers provide an implementation of various message writers
. Message writers are services that consume Mainflux messages, transform them to the desired format and store them in a specific data store. The path of the configuration file can be set using the following environment variables: MF_CASSANDRA_WRITER_CONFIG_PATH
, MF_POSTGRES_WRITER_CONFIG_PATH
, MF_INFLUX_WRITER_CONFIG_PATH
, MF_MONGO_WRITER_CONFIG_PATH
and MF_TIMESCALE_WRITER_CONFIG_PATH
.
Each writer can filter messages based on the subjects list that is set in the config.toml
configuration file. If you want to listen on all subjects, just set the field subjects
in the [subscriber]
section as [\"channels.>\"]
, otherwise pass the list of subjects. Here is an example:
[subscriber]\nsubjects = [\"channels.*.messages.bedroom.temperature\",\"channels.*.messages.bedroom.humidity\"]\n
Regarding the Subtopics Section in the messaging page, the example channels/<channel_id>/messages/bedroom/temperature
can be filtered as \"channels.*.bedroom.temperature\"
. The format of this filtering list is determined by the default message broker, NATS (Subject-Based Messaging & Wildcards).
There are two types of transformers: SenML and JSON. The transformer type is set in the configuration file.
For SenML transformer, supported message payload formats are SenML+CBOR and SenML+JSON. They are configurable over content_type
field in the [transformer]
section and expect application/senml+json
or application/senml+cbor
formats. Here is an example:
[transformer]\nformat = \"senml\"\ncontent_type = \"application/senml+json\"\n
Usually, the payload of the IoT message contains the message time. It can be in different formats (like base time and record time in the case of SenML) and the message field can be under an arbitrary key. Usually, we want to map that time to the Mainflux Message field Created; for that reason, we need to configure the Transformer to be able to read the field, parse it using the proper format and location (if the device's time is different from the service time), and map it to the Mainflux Message.
For JSON transformer you can configure time_fields
in the [transformer]
section to use arbitrary fields from the JSON message payload as timestamp. time_fields
is represented by an array of objects with fields field_name
, field_format
and location
that represent respectively the name of the JSON key to use as timestamp, the time format to use for the field value and the time location. Here is an example:
[transformer]\nformat = \"json\"\ntime_fields = [{ field_name = \"seconds_key\", field_format = \"unix\", location = \"UTC\"},\n { field_name = \"millis_key\", field_format = \"unix_ms\", location = \"UTC\"},\n { field_name = \"micros_key\", field_format = \"unix_us\", location = \"UTC\"},\n { field_name = \"nanos_key\", field_format = \"unix_ns\", location = \"UTC\"}]\n
The JSON transformer can be used for any JSON payload. For messages that contain a JSON array as the root element, the JSON Transformer normalizes the data: it creates a separate JSON message for each JSON object in the root. In order to be processed and stored properly, JSON messages need to contain message format information. For the sake of simplicity, nested JSON objects are flattened to a single JSON object in InfluxDB, using composite keys separated by the /
separator. This implies that the separator character (/
) is not allowed in the JSON object key while using InfluxDB. Apart from InfluxDB, separator character (/
) usage in the JSON object key is permitted, since other Writer types do not flatten the nested JSON objects. For example, the following JSON object:
{\n \"name\": \"name\",\n \"id\": 8659456789564231564,\n \"in\": 3.145,\n \"alarm\": true,\n \"ts\": 1571259850000,\n \"d\": {\n \"tmp\": 2.564,\n \"hmd\": 87,\n \"loc\": {\n \"x\": 1,\n \"y\": 2\n }\n }\n}\n
for InfluxDB will be transformed to:
{\n \"name\": \"name\",\n \"id\": 8659456789564231564,\n \"in\": 3.145,\n \"alarm\": true,\n \"ts\": 1571259850000,\n \"d/tmp\": 2.564,\n \"d/hmd\": 87,\n \"d/loc/x\": 1,\n \"d/loc/y\": 2\n}\n
while for other Writers it will preserve its original format.
The message format is stored in the subtopic. It's the last part of the subtopic. In the example:
http://localhost:8008/channels/<channelID>/messages/home/temperature/myFormat\n
the message format is myFormat
. It can be any valid subtopic name, JSON transformer is format-agnostic. The format is used by the JSON message consumers so that they can process the message properly. If the format is not present (i.e. message subtopic is empty), JSON Transformer will report an error. Message writers will store the message(s) in the table/collection/measurement (depending on the underlying database) with the name of the format (which in the example is myFormat
). Mainflux writers will try to save any format received (whether it will be successful depends on the writer implementation and the underlying database), but it's recommended that publishers don't send different formats to the same subtopic.
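For instance, publishing a JSON message over the HTTP adapter to the subtopic above (a sketch; the channel ID and thing secret are placeholders, and the exact headers depend on your adapter configuration) would store it under the myFormat name:
curl -s -S -i -X POST -H \"Content-Type: application/json\" -H \"Authorization: Thing <thing_secret>\" http://localhost:8008/channels/<channelID>/messages/home/temperature/myFormat -d '{\"temperature\": 23.5}'\n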
From the project root execute the following command:
docker-compose -f docker/addons/influxdb-writer/docker-compose.yml up -d\n
This will install and start:
Those new services will take some additional ports:
To access Influx-UI, navigate to http://localhost:8086
and log in with username: mainflux
, password: mainflux
./docker/addons/cassandra-writer/init.sh\n
Please note that Cassandra may not be suitable for your testing environment because of its high system requirements.
"},{"location":"storage/#mongodb-and-mongodb-writer","title":"MongoDB and MongoDB Writer","text":"docker-compose -f docker/addons/mongodb-writer/docker-compose.yml up -d\n
MongoDB default port (27017) is exposed, so you can use various tools for database inspection and data visualization.
"},{"location":"storage/#postgresql-and-postgresql-writer","title":"PostgreSQL and PostgreSQL Writer","text":"docker-compose -f docker/addons/postgres-writer/docker-compose.yml up -d\n
Postgres default port (5432) is exposed, so you can use various tools for database inspection and data visualization.
"},{"location":"storage/#timescale-and-timescale-writer","title":"Timescale and Timescale Writer","text":"docker-compose -f docker/addons/timescale-writer/docker-compose.yml up -d\n
Timescale default port (5432) is exposed, so you can use various tools for database inspection and data visualization.
"},{"location":"storage/#readers","title":"Readers","text":"Readers provide an implementation of various message readers
. Message readers are services that consume normalized (in SenML
format) Mainflux messages from data storage and open an HTTP API for message consumption. Installing the corresponding writer before the reader is implied.
Each of the Reader services exposes the same HTTP API for fetching messages on its default port.
To read the sent messages on a channel with ID channel_id
you should send a GET
request to /channels/<channel_id>/messages
with the thing access token in the Authorization
header. That thing must be connected to the channel with channel_id
Response should look like this:
HTTP/1.1 200 OK\nContent-Type: application/json\nDate: Tue, 18 Sep 2018 18:56:19 GMT\nContent-Length: 228\n\n{\n \"messages\": [\n {\n \"Channel\": 1,\n \"Publisher\": 2,\n \"Protocol\": \"mqtt\",\n \"Name\": \"name:voltage\",\n \"Unit\": \"V\",\n \"Value\": 5.6,\n \"Time\": 48.56\n },\n {\n \"Channel\": 1,\n \"Publisher\": 2,\n \"Protocol\": \"mqtt\",\n \"Name\": \"name:temperature\",\n \"Unit\": \"C\",\n \"Value\": 24.3,\n \"Time\": 48.56\n }\n ]\n}\n
Note that you will receive only those messages that were sent by the authorization token's owner. You can specify offset
and limit
parameters in order to fetch a specific group of messages. An example HTTP request looks like:
curl -s -S -i -H \"Authorization: Thing <thing_secret>\" http://localhost:<service_port>/channels/<channel_id>/messages?offset=0&limit=5&format=<subtopic>\n
If you don't provide offset
and limit
parameters, default values will be used instead: 0 for offset
and 10 for limit
. The format
parameter indicates the last subtopic of the message. As indicated under the Writers
section, the message format is stored in the subtopic as the last part of the subtopic. In the example:
http://localhost:<service_port>/channels/<channelID>/messages/home/temperature/myFormat\n
the message format is myFormat
and the value for format=<subtopic>
is format=myFormat
.
To start InfluxDB reader, execute the following command:
docker-compose -f docker/addons/influxdb-reader/docker-compose.yml up -d\n
"},{"location":"storage/#cassandra-reader","title":"Cassandra Reader","text":"To start Cassandra reader, execute the following command:
docker-compose -f docker/addons/cassandra-reader/docker-compose.yml up -d\n
"},{"location":"storage/#mongodb-reader","title":"MongoDB Reader","text":"To start MongoDB reader, execute the following command:
docker-compose -f docker/addons/mongodb-reader/docker-compose.yml up -d\n
"},{"location":"storage/#postgresql-reader","title":"PostgreSQL Reader","text":"To start PostgreSQL reader, execute the following command:
docker-compose -f docker/addons/postgres-reader/docker-compose.yml up -d\n
"},{"location":"storage/#timescale-reader","title":"Timescale Reader","text":"To start Timescale reader, execute the following command:
docker-compose -f docker/addons/timescale-reader/docker-compose.yml up -d\n
"},{"location":"tracing/","title":"Tracing","text":"Distributed tracing is a method of profiling and monitoring applications. It can provide valuable insight when optimizing and debugging an application. Mainflux includes the Jaeger open tracing framework as a service with its stack by default.
"},{"location":"tracing/#launch","title":"Launch","text":"The Jaeger service will launch with the rest of the Mainflux services. All services can be launched using:
make run\n
The Jaeger UI can then be accessed at http://localhost:16686
from a browser. Details about the UI can be found in Jaeger's official documentation.
The Jaeger service can be disabled by using the scale
flag with docker-compose up
and setting the jaeger container to 0.
--scale jaeger=0\n
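Assuming the default compose file location, the full command might look like:
docker-compose -f docker/docker-compose.yml up --scale jaeger=0\n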
Jaeger uses 5 ports within the Mainflux framework. These ports can be edited in the .env
file.
Mainflux provides for tracing of messages ingested into the Mainflux platform. The message metadata such as topic, sub-topic, subscriber and publisher is also included in traces.
The messages are tracked from end to end from the point they are published to the consumers where they are stored.
"},{"location":"tracing/#example","title":"Example","text":"As an example for using Jaeger, we can look at the traces generated after provisioning the system. Make sure to have ran the provisioning script that is part of the Getting Started step.
Before getting started with Jaeger, there are a few terms that are important to define. A trace
can be thought of as one transaction within the system. A trace is made up of one or more spans
. These are the individual steps that must be taken for a trace to perform its action. A span has tags
and logs
associated with it. Tags are key-value pairs that provide information such as a database type or http method. Tags are useful when filtering traces in the Jaeger UI. Logs are structured messages used at specific points in the trace's transaction. These are typically used to indicate an error.
When first navigating to the Jaeger UI, it will present a search page with an empty results section. There are multiple fields to search from including service, operation, tags and time frames. Clicking Find Traces
will fill the results section with traces containing the selected fields.
The top of the results page includes a scatter plot of the traces and their durations. This can be very useful for finding a trace with a prolonged runtime. Clicking on one of the points will open the trace page of that trace.
Below the graph is a list of all the traces with a summary of their information. Each trace shows a unique identifier, the overall runtime, the spans it is composed of and when it was run. Clicking on one of the traces will open the trace page of that trace.
The trace page provides a more detailed breakdown of the individual span calls. The top of the page shows a chart breaking down what spans the trace is spending its time in. Below the chart are the individual spans and their details. Expanding the spans shows any tags associated with that span and process information. This is also where any errors or logs seen while running the span will be reported.
This is just a brief overview of the possibilities of Jaeger and its UI. For more information, check out Jaeger's official documentation.
"},{"location":"twins/","title":"Twins Service","text":"Mainflux twins service is built on top of the Mainflux platform. In order to fully understand what follows, be sure to get acquainted with overall Mainflux architecture.
"},{"location":"twins/#what-is-digital-twin","title":"What is Digital Twin","text":"Twin refers to a digital representation of a real world data system consisting of possibly multiple data sources/producers and/or destinations/consumers (data agents).
For example, an industrial machine can use multiple protocols such as MQTT, OPC-UA, a regularly updated machine hosted CSV file etc. to send measurement data - such as flowrate, material temperature, etc. - and state metadata - such as engine and chassis temperature, engine rotations per seconds, identity of the current human operator, etc. - as well as to receive control, i.e. actuation messages - such as, turn on/off light, increment/decrement borer speed, change blades, etc.
Digital twin is an abstract - and usually less detailed - digital replica of a real world system such as the industrial machine we have just described. It is used to create and store information about system's state at any given moment, to compare system state over a given period of time - so-called diffs or deltas - as well as to control agents composing the system.
"},{"location":"twins/#mainflux-digital-twin","title":"Mainflux Digital Twin","text":"Any data producer or data consumer - which we refer to here collectively as data agent - or an interrelated system of data agents, can be represented by means of possibly multiple Mainflux things, channels and subtopics. For example, an OPC-UA server can be represented as a Mainflux thing and its nodes can be represented as multiple Mainflux channels or multiple subtopics of a single Mainflux channel. What is more, you can invert the representation: you can represent server as a channel and node as things. Mainflux platform is meant to empower you with the freedom of expression so you can make a digital representation of any data agent according to your needs.
Although this works well, satisfies the requirements of a wide variety of use cases and corresponds to the intended use of the Mainflux IoT platform, this setup can be insufficient in two important ways. Firstly, different things, channels, and their connections - i.e. Mainflux representations of different data agent structures - are unrelated to each other, i.e. they do not form a meaningful whole and, as a consequence, they do not represent a single unified system. Secondly, the semantic aspect, i.e. the meaning of different things and channels, is not transparent and is not defined by the sole use of Mainflux platform entities (channels and things).
Certainly, we can try to describe things and channels connections and relations as well as their meaning - i.e. their role, position, function in the overall system - by means of their metadata. Although this might work well - with a proviso of a lot of additional effort of writing the relatively complex code to create and parse metadata - it is not a practical approach and we still don't get - at least not out of the box - a readable and useful overview of the system as a whole. Also, this approach does not enable us to answer a simple but very important question, i.e. what was the detailed state of a complete system at a certain moment in time.
To overcome these problems, Mainflux comes with a digital twin service. The twins service is built on top of the Mainflux platform and relies on its architecture and entities, more precisely, on Mainflux users, things and channels. The primary task of the twin service is to handle Mainflux digital twins. Mainflux digital twin consists of three parts:
Mainflux Twins service depends on the Mainflux IoT platform. The following diagram shows the place of the twins service in the overall Mainflux architecture:
You use an HTTP client to communicate with the twins service. Every request sent to the twins service is authenticated by users service. Twins service handles CRUD requests and creates, retrieves, updates and deletes twins. The CRUD operations depend on the database to persist and fetch already saved twins.
Twins service listens to the message broker server and intercepts messages passing via the message broker. Every Mainflux message contains information about subchannel and topic used to send a message. Twins service compares this info with attribute definitions of twins persisted in the database, fetches the corresponding twins and updates their respective states.
Before we delve into the twin's anatomy, it is important to realize that in order to use the Mainflux twins service, you have to provision Mainflux things and channels and connect them beforehand. As you go, you can modify your things, channels and connections, and you can modify your digital twin to reflect these modifications, but you need at least a minimal setup in order to use the twins service.
"},{"location":"twins/#twins-anatomy","title":"Twin's Anatomy","text":"Twin's general information stores twin's owner email - owner is represented by Mainflux user -, twin's ID (unique) and name (not necessarily unique), twin's creation and update dates as well as twin's revision number. The latter refers to the sequential number of twin's definition.
The twin's definition is meant to be a semantic representation of the system's data sources and consumers (data agents). Each data agent is represented by means of an attribute. An attribute consists of the data agent's name and the Mainflux channel and subtopic over which it communicates. Nota bene: each attribute is uniquely defined by the combination of channel and subtopic, and we cannot have two or more attributes with the same channel and subtopic in the same definition.
Attributes have a state persistence flag that determines whether the messages communicated by their corresponding channel and subtopic trigger the creation of a new twin state. Twin states are persisted in a separate collection of the same database. Currently, the twins service uses MongoDB. InfluxDB support for twins and states persistence is on the roadmap.
When we define our digital twin, its JSON representation might look like this:
{\n \"owner\": \"john.doe@email.net\",\n \"id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"name\": \"grinding machine 2\",\n \"revision\": 2,\n \"created\": \"2020-05-05T08:41:39.142Z\",\n \"updated\": \"2020-05-05T08:49:12.638Z\",\n \"definitions\": [\n {\n \"id\": 0,\n \"created\": \"2020-05-05T08:41:39.142Z\",\n \"attributes\": [],\n \"delta\": 1000000\n },\n {\n \"id\": 1,\n \"created\": \"2020-05-05T08:46:23.207Z\",\n \"attributes\": [\n {\n \"name\": \"engine temperature\",\n \"channel\": \"7ef6c61c-f514-402f-af4b-2401b588bfec\",\n \"subtopic\": \"engine\",\n \"persist_state\": true\n },\n {\n \"name\": \"chassis temperature\",\n \"channel\": \"7ef6c61c-f514-402f-af4b-2401b588bfec\",\n \"subtopic\": \"chassis\",\n \"persist_state\": true\n },\n {\n \"name\": \"rotations per sec\",\n \"channel\": \"a254032a-8bb6-4973-a2a1-dbf80f181a86\",\n \"subtopic\": \"\",\n \"persist_state\": false\n }\n ],\n \"delta\": 1000000\n },\n {\n \"id\": 2,\n \"created\": \"2020-05-05T08:49:12.638Z\",\n \"attributes\": [\n {\n \"name\": \"engine temperature\",\n \"channel\": \"7ef6c61c-f514-402f-af4b-2401b588bfec\",\n \"subtopic\": \"engine\",\n \"persist_state\": true\n },\n {\n \"name\": \"chassis temperature\",\n \"channel\": \"7ef6c61c-f514-402f-af4b-2401b588bfec\",\n \"subtopic\": \"chassis\",\n \"persist_state\": true\n },\n {\n \"name\": \"rotations per sec\",\n \"channel\": \"a254032a-8bb6-4973-a2a1-dbf80f181a86\",\n \"subtopic\": \"\",\n \"persist_state\": false\n },\n {\n \"name\": \"precision\",\n \"channel\": \"aed0fbca-0d1d-4b07-834c-c62f31526569\",\n \"subtopic\": \"\",\n \"persist_state\": true\n }\n ],\n \"delta\": 1000000\n }\n ]\n}\n
In the case of the upper twin, we begin with an empty definition, the one with the id
0 - we could have provided the definition immediately - and over the course of time, we add two more definitions, so the total number of revisions is 2 (revision index is zero-based). We decide not to persist the number of rotations per second in our digital twin state. We define it, though, because the definition and its attributes are used not only to define states of a complex data agent system, but also to define the semantic structure of the system. delta
is the number of nanoseconds used to determine whether the received attribute value should trigger the generation of the new state or the same state should be updated. The reason for this is to enable state sampling over the regular intervals of time. Discarded values are written to the database of choice by Mainflux writers, so you can always retrieve intermediate values if need be.
States are created according to the twin's current definition. A state stores the twin's ID - every state belongs to a single twin - its own ID, the twin's definition number, the creation date and the actual payload. The payload is a set of key-value pairs where a key corresponds to the attribute name and a value is the actual value of the attribute. All SenML value types are supported.
A JSON representation of a partial list of states might look like this:
{\n \"total\": 28,\n \"offset\": 10,\n \"limit\": 5,\n \"states\": [\n {\n \"twin_id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"id\": 11,\n \"definition\": 1,\n \"created\": \"2020-05-05T08:49:06.167Z\",\n \"payload\": {\n \"chassis temperature\": 0.3394171011161684,\n \"engine temperature\": 0.3814079472715233\n }\n },\n {\n \"twin_id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"id\": 12,\n \"definition\": 1,\n \"created\": \"2020-05-05T08:49:12.168Z\",\n \"payload\": {\n \"chassis temperature\": 1.8116442194724147,\n \"engine temperature\": 0.3814079472715233\n }\n },\n {\n \"twin_id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"id\": 13,\n \"definition\": 2,\n \"created\": \"2020-05-05T08:49:18.174Z\",\n \"payload\": {\n \"chassis temperature\": 1.8116442194724147,\n \"engine temperature\": 3.2410616702795814\n }\n },\n {\n \"twin_id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"id\": 14,\n \"definition\": 2,\n \"created\": \"2020-05-05T08:49:19.145Z\",\n \"payload\": {\n \"chassis temperature\": 3.2410616702795814,\n \"engine temperature\": 3.2410616702795814,\n \"precision\": 8.922156489392854\n }\n },\n {\n \"twin_id\": \"a838e608-1c1b-4fea-9c34-def877473a89\",\n \"id\": 15,\n \"definition\": 2,\n \"created\": \"2020-05-05T08:49:24.178Z\",\n \"payload\": {\n \"chassis temperature\": 0.8694383878692546,\n \"engine temperature\": 3.2410616702795814,\n \"precision\": 8.922156489392854\n }\n }\n ]\n}\n
As you can see, the first two states correspond to definition 1 and have only two attributes in the payload. The rest of the states are based on definition 2, where we persist three attributes and, as a consequence, their payload consists of three entries.
"},{"location":"twins/#authentication-and-authorization","title":"Authentication and Authorization","text":"Twin belongs to a Mainflux user, tenant representing a physical person or an organization. User owns Mainflux things and channels as well as twins. Mainflux user provides authorization and authentication mechanisms to twins service. For more details, please see Authentication with Mainflux keys. In practical terms, we need to create a Mainflux user in order to create a digital twin. Every twin belongs to exactly one user. One user can have unlimited number of digital twins.
"},{"location":"twins/#twin-operations","title":"Twin Operations","text":"For more information about the Twins service HTTP API please refer to the twins service OpenAPI file.
"},{"location":"twins/#create-and-update","title":"Create and Update","text":"Create and update requests use JSON body to initialize and modify, respectively, twin. You can omit every piece of data - every key-value pair - from the JSON. However, you must send at least an empty JSON body.
{\n \"name\": \"twin_name\",\n \"definition\": {\n \"attributes\": [\n {\n \"name\": \"temperature\",\n \"channel\": \"3b57b952-318e-47b5-b0d7-a14f61ecd03b\",\n \"subtopic\": \"temperature\",\n \"persist_state\": true\n },\n {\n \"name\": \"humidity\",\n \"channel\": \"3b57b952-318e-47b5-b0d7-a14f61ecd03b\",\n \"subtopic\": \"humidity\",\n \"persist_state\": false\n },\n {\n \"name\": \"pressure\",\n \"channel\": \"7ef6c61c-f514-402f-af4b-2401b588bfec\",\n \"subtopic\": \"\",\n \"persist_state\": true\n }\n ],\n \"delta\": 1\n }\n}\n
"},{"location":"twins/#create","title":"Create","text":"Create request uses POST HTTP method to create twin:
curl -s -S -i -X POST -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" http://localhost:9018/twins -d '<twin_data>'\n
If you do not supply the definition, the empty definition of the form
{\n \"id\": 0,\n \"created\": \"2020-05-05T08:41:39.142Z\",\n \"attributes\": [],\n \"delta\": 1000000\n}\n
will be created.
"},{"location":"twins/#update","title":"Update","text":"curl -s -S -i -X PUT -H \"Content-Type: application/json\" -H \"Authorization: Bearer <user_token>\" http://localhost:9018/<twin_id> -d '<twin_data>'\n
"},{"location":"twins/#view","title":"View","text":"curl -s -S -i -X GET -H \"Authorization: Bearer <user_token>\" http://localhost:9018/twins/<twin_id>\n
"},{"location":"twins/#list","title":"List","text":"curl -s -S -i -X GET -H \"Authorization: Bearer <user_token>\" http://localhost:9018/twins\n
List requests accept limit
and offset
query parameters. By default, i.e. without these parameters, list requests fetch only the first ten twins (or fewer, if there are fewer than ten twins).
You can fetch twins [10-29) like this:
curl -s -S -i -X GET -H \"Authorization: Bearer <user_token>\" http://localhost:9018/twins?offset=10&limit=20\n
"},{"location":"twins/#delete","title":"Delete","text":"curl -s -S -i -X DELETE -H \"Authorization: Bearer <user_token>\" http://localhost:9018/twins/<twin_id>\n
"},{"location":"twins/#states-operations","title":"STATES operations","text":""},{"location":"twins/#list_1","title":"List","text":"curl -s -S -i -X GET -H \"Authorization: Bearer <user_token>\" http://localhost:9018/states/<twin_id>\n
List requests accept limit
and offset
query parameters. By default, i.e. without these parameters, list requests fetch only the first ten states (or fewer, if there are fewer than ten states).
You can fetch states [10-29) like this:
curl -s -S -i -X GET -H \"Authorization: Bearer <user_token>\" http://localhost:9018/states/<twin_id>?offset=10&limit=20\n
"},{"location":"twins/#notifications","title":"Notifications","text":"Every twin and states related operation publishes notifications via the message broker. To fully understand what follows, please read about Mainflux messaging capabilities and utilities.
In order to pick up these notifications, you have to create a Mainflux channel before you start the twins service and inform the twins service about the channel by means of an environment variable, like this:
export MF_TWINS_CHANNEL_ID=f6894dfe-a7c9-4eef-a614-637ebeea5b4c\n
The twins service will use this channel to publish notifications related to twins creation, update, retrieval and deletion. It will also publish notifications related to state saving into the database.
All notifications will be published on the following message broker subject:
channels.<mf_twins_channel_id>.<optional_subtopic>\n
where <optional_subtopic>
is one of the following:
create.success
- on successful twin creation,create.failure
- on twin creation failure,update.success
- on successful twin update,update.failure
- on twin update failure,get.success
- on successful twin retrieval,get.failure
- on twin retrieval failure,remove.success
- on successful twin deletion,remove.failure
- on twin deletion failure,save.success
- on successful state savesave.failure
- on state save failure.Normally, you can use the default message broker, NATS, wildcards. In order to learn more about Mainflux channel topic composition, please read about subtopics. The point is to be able to subscribe to all subjects or any operation pair subject - e.g. create.success/failure - by means of one connection and read all messages or all operation related messages in the context of the same subscription.
Since messages published on message broker are republished on any other protocol supported by Mainflux - HTTP, MQTT, CoAP and WS - you can use any supported protocol client to pick up notifications.
"}]} \ No newline at end of file diff --git a/security/index.html b/security/index.html new file mode 100644 index 00000000..4637afa0 --- /dev/null +++ b/security/index.html @@ -0,0 +1,860 @@ + + + + + + + + + + + + + + + + + + + + + + + + + +If either the cert or key is not set, the server will use insecure transport.
+MF_USERS_SERVER_CERT
the path to the server certificate in PEM format.
MF_USERS_SERVER_KEY
the path to the server key in PEM format.
If either the cert or key is not set, the server will use insecure transport.
+MF_THINGS_SERVER_CERT
the path to the server certificate in PEM format.
MF_THINGS_SERVER_KEY
the path to the server key in PEM format.
Sometimes it makes sense to run Things as a standalone service to reduce network traffic or simplify deployment. This means that the Things service operates using only a single user and is able to authorize it without gRPC communication with the Auth service. When running Things in standalone mode, the Auth
and Users
services can be omitted from the deployment.
+To run the service in standalone mode, set MF_THINGS_STANDALONE_EMAIL
and MF_THINGS_STANDALONE_TOKEN
.
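For example, a standalone deployment might be configured like this before starting the Things service (the identity and token values below are placeholders, not defaults):
export MF_THINGS_STANDALONE_EMAIL=admin@example.com
export MF_THINGS_STANDALONE_TOKEN=<user_token>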
If you wish to secure the gRPC connection to Things
and Users
services, you must define the CAs that you trust. This does not support mutual certificate authentication.
MF_HTTP_ADAPTER_CA_CERTS
, MF_MQTT_ADAPTER_CA_CERTS
, MF_WS_ADAPTER_CA_CERTS
, MF_COAP_ADAPTER_CA_CERTS
- the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the adapter will fail to start up.
MF_THINGS_CA_CERTS
- the path to a file that contains the CAs in PEM format. If not set, the default connection will be insecure. If it fails to read the file, the service will fail to start up.
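As a sketch, assuming all services share a single CA bundle mounted at /etc/ssl/certs/mainflux-ca.pem (a hypothetical path), the configuration could look like this:
export MF_HTTP_ADAPTER_CA_CERTS=/etc/ssl/certs/mainflux-ca.pem
export MF_MQTT_ADAPTER_CA_CERTS=/etc/ssl/certs/mainflux-ca.pem
export MF_WS_ADAPTER_CA_CERTS=/etc/ssl/certs/mainflux-ca.pem
export MF_COAP_ADAPTER_CA_CERTS=/etc/ssl/certs/mainflux-ca.pem
export MF_THINGS_CA_CERTS=/etc/ssl/certs/mainflux-ca.pem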
By default, Mainflux will connect to Postgres using insecure transport. +If a secured connection is required, you can select the SSL mode and set paths to any extra certificates and keys needed.
+MF_USERS_DB_SSL_MODE
the SSL connection mode for Users.
+MF_USERS_DB_SSL_CERT
the path to the certificate file for Users.
+MF_USERS_DB_SSL_KEY
the path to the key file for Users.
+MF_USERS_DB_SSL_ROOT_CERT
the path to the root certificate file for Users.
MF_THINGS_DB_SSL_MODE
the SSL connection mode for Things.
+MF_THINGS_DB_SSL_CERT
the path to the certificate file for Things.
+MF_THINGS_DB_SSL_KEY
the path to the key file for Things.
+MF_THINGS_DB_SSL_ROOT_CERT
the path to the root certificate file for Things.
Supported database connection modes are: disabled
(default), required
, verify-ca
and verify-full
.
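For instance, a verify-full configuration for the Users database could be set like this (the certificate paths are illustrative):
export MF_USERS_DB_SSL_MODE=verify-full
export MF_USERS_DB_SSL_CERT=/certs/users-client.crt
export MF_USERS_DB_SSL_KEY=/certs/users-client.key
export MF_USERS_DB_SSL_ROOT_CERT=/certs/root-ca.crt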
By default, gRPC communication is not secure, as the Mainflux system is most often run in a private network behind a reverse proxy.
+However, TLS can be activated and configured.
+ + + + + + +Mainflux supports various storage databases in which messages are stored:
+These storages are activated via docker-compose add-ons.
+The <project_root>/docker
folder contains an addons
directory. This directory is used for various services that are not core to the Mainflux platform but could be used for providing additional features.
In order to run these services, the core services, as well as the network from the core composition, should already be running.
+Writers provide an implementation of various message writers
. Message writers are services that consume Mainflux messages, transform them to the desired format and store them in a specific data store. The path of the configuration file can be set using the following environment variables: MF_CASSANDRA_WRITER_CONFIG_PATH
, MF_POSTGRES_WRITER_CONFIG_PATH
, MF_INFLUX_WRITER_CONFIG_PATH
, MF_MONGO_WRITER_CONFIG_PATH
and MF_TIMESCALE_WRITER_CONFIG_PATH
.
Each writer can filter messages based on the subjects list that is set in the config.toml
configuration file. If you want to listen on all subjects, just set the field subjects
in the [subscriber]
section as ["channels.>"]
, otherwise pass the list of subjects. Here is an example:
[subscriber]
+subjects = ["channels.*.messages.bedroom.temperature","channels.*.messages.bedroom.humidity"]
+
+Regarding the Subtopics Section in the messaging page, the example channels/<channel_id>/messages/bedroom/temperature
can be filtered as "channels.*.bedroom.temperature"
. The formatting of this filtering list is determined by the format of the default message broker, NATS (Subject-Based Messaging & Wildcards).
There are two types of transformers: SenML and JSON. The transformer type is set in the configuration file.
+For the SenML transformer, the supported message payload formats are SenML+CBOR and SenML+JSON. They are configurable over the content_type
field in the [transformer]
section and expect application/senml+json
or application/senml+cbor
formats. Here is an example:
[transformer]
+format = "senml"
+content_type = "application/senml+json"
+
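For context, a single-record SenML+JSON payload accepted by this transformer could look like the following (values adapted from the SenML specification examples):
[{"bn":"urn:dev:ow:10e2073a01080063:","bt":1.276020076001e+09,"n":"voltage","u":"V","v":120.1}]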
+Usually, the payload of the IoT message contains the message time. It can be in different formats (like base time and record time in the case of SenML) and the message field can be under an arbitrary key. Usually, we want to map that time to the Mainflux Message field Created and, for that reason, we need to configure the Transformer to be able to read the field, parse it using the proper format and location (if the device's time differs from the service time), and map it to the Mainflux Message.
+For the JSON transformer, you can configure time_fields
in the [transformer]
section to use arbitrary fields from the JSON message payload as a timestamp. time_fields
is represented by an array of objects with fields field_name
, field_format
and location
that represent, respectively, the name of the JSON key to use as the timestamp, the time format to use for the field value and the time location. Here is an example:
[transformer]
+format = "json"
+time_fields = [{ field_name = "seconds_key", field_format = "unix", location = "UTC"},
+ { field_name = "millis_key", field_format = "unix_ms", location = "UTC"},
+ { field_name = "micros_key", field_format = "unix_us", location = "UTC"},
+ { field_name = "nanos_key", field_format = "unix_ns", location = "UTC"}]
+
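A message matching that configuration might then carry its timestamp under one of the configured keys; for example (a hypothetical payload, with the value in Unix seconds):
{"seconds_key": 1571259850, "temperature": 23.4}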
+The JSON transformer can be used for any JSON payload. For messages that contain a JSON array as the root element, the JSON Transformer normalizes the data: it creates a separate JSON message for each JSON object in the root. In order to be processed and stored properly, JSON messages need to contain message format information. For the sake of simplicity, nested JSON objects are flattened to a single JSON object in InfluxDB, using composite keys separated by the /
separator. This implies that the separator character (/
) is not allowed in the JSON object key while using InfluxDB. Apart from InfluxDB, the separator character (/
) is permitted in the JSON object key, since other Writer types do not flatten the nested JSON objects. For example, the following JSON object:
{
+ "name": "name",
+ "id": 8659456789564231564,
+ "in": 3.145,
+ "alarm": true,
+ "ts": 1571259850000,
+ "d": {
+ "tmp": 2.564,
+ "hmd": 87,
+ "loc": {
+ "x": 1,
+ "y": 2
+ }
+ }
+}
+
+for InfluxDB will be transformed to:
+{
+ "name": "name",
+ "id": 8659456789564231564,
+ "in": 3.145,
+ "alarm": true,
+ "ts": 1571259850000,
+ "d/tmp": 2.564,
+ "d/hmd": 87,
+ "d/loc/x": 1,
+ "d/loc/y": 2
+}
+
+while for other Writers it will preserve its original format.
+The message format is stored in the subtopic. It's the last part of the subtopic. In the example:
+http://localhost:8008/channels/<channelID>/messages/home/temperature/myFormat
+
+the message format is myFormat
. It can be any valid subtopic name; the JSON transformer is format-agnostic. The format is used by the JSON message consumers so that they can process the message properly. If the format is not present (i.e. the message subtopic is empty), the JSON Transformer will report an error. Message writers will store the message(s) in the table/collection/measurement (depending on the underlying database) with the name of the format (which in the example is myFormat
). Mainflux writers will try to save any format received (whether it will be successful depends on the writer implementation and the underlying database), but it's recommended that publishers don't send different formats to the same subtopic.
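For illustration, publishing over the HTTP adapter with myFormat as the last subtopic part could look like this (the channel ID and thing secret are placeholders, and the JSON payload is an arbitrary example):
curl -s -S -i -X POST -H "Content-Type: application/json" -H "Authorization: Thing <thing_secret>" http://localhost:8008/channels/<channel_id>/messages/home/temperature/myFormat -d '{"temperature": 23.4}'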
From the project root execute the following command:
+docker-compose -f docker/addons/influxdb-writer/docker-compose.yml up -d
+
+This will install and start:
+Those new services will take some additional ports:
+To access Influx-UI, navigate to http://localhost:8086
and log in with username: mainflux
, password: mainflux
./docker/addons/cassandra-writer/init.sh
+
+Please note that Cassandra may not be suitable for your testing environment because of its high system requirements.
+docker-compose -f docker/addons/mongodb-writer/docker-compose.yml up -d
+
+MongoDB default port (27017) is exposed, so you can use various tools for database inspection and data visualization.
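Since the port is exposed, any MongoDB client can be used for a quick check; for example, with mongosh (the writer's database name is deployment-specific, so here we simply list the available databases):
mongosh "mongodb://localhost:27017" --eval 'db.adminCommand({ listDatabases: 1 })'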
+docker-compose -f docker/addons/postgres-writer/docker-compose.yml up -d
+
+Postgres default port (5432) is exposed, so you can use various tools for database inspection and data visualization.
+docker-compose -f docker/addons/timescale-writer/docker-compose.yml up -d
+
+Timescale default port (5432) is exposed, so you can use various tools for database inspection and data visualization.
+Readers provide an implementation of various message readers
. Message readers are services that consume normalized (in SenML
format) Mainflux messages from data storage and open an HTTP API for message consumption. Installing the corresponding writer before the reader is implied.
Each of the Reader services exposes the same HTTP API for fetching messages on its default port.
To read the messages sent on the channel with id channel_id
you should send a GET
request to /channels/<channel_id>/messages
with the thing access token in the Authorization
header. That thing must be connected to the channel with channel_id
Response should look like this:
+HTTP/1.1 200 OK
+Content-Type: application/json
+Date: Tue, 18 Sep 2018 18:56:19 GMT
+Content-Length: 228
+
+{
+ "messages": [
+ {
+ "Channel": 1,
+ "Publisher": 2,
+ "Protocol": "mqtt",
+ "Name": "name:voltage",
+ "Unit": "V",
+ "Value": 5.6,
+ "Time": 48.56
+ },
+ {
+ "Channel": 1,
+ "Publisher": 2,
+ "Protocol": "mqtt",
+ "Name": "name:temperature",
+ "Unit": "C",
+ "Value": 24.3,
+ "Time": 48.56
+ }
+ ]
+}
+
Note that you will receive only those messages that were sent by the authorization token's owner. You can specify offset
and limit
parameters in order to fetch a specific group of messages. An example HTTP request looks like:
curl -s -S -i -H "Authorization: Thing <thing_secret>" http://localhost:<service_port>/channels/<channel_id>/messages?offset=0&limit=5&format=<subtopic>
+
+If you don't provide offset
and limit
parameters, default values will be used instead: 0 for offset
and 10 for limit
. The format
parameter indicates the last subtopic of the message. As indicated under the Writers
section, the message format is stored in the subtopic as the last part of the subtopic. In the example:
http://localhost:<service_port>/channels/<channelID>/messages/home/temperature/myFormat
+
+the message format is myFormat
and the value for format=<subtopic>
is format=myFormat
.
To start InfluxDB reader, execute the following command:
+docker-compose -f docker/addons/influxdb-reader/docker-compose.yml up -d
+
+To start Cassandra reader, execute the following command:
+docker-compose -f docker/addons/cassandra-reader/docker-compose.yml up -d
+
+To start MongoDB reader, execute the following command:
+docker-compose -f docker/addons/mongodb-reader/docker-compose.yml up -d
+
+To start PostgreSQL reader, execute the following command:
+docker-compose -f docker/addons/postgres-reader/docker-compose.yml up -d
+
+To start Timescale reader, execute the following command:
+docker-compose -f docker/addons/timescale-reader/docker-compose.yml up -d
+
+
+
+
+
+
+
+ Distributed tracing is a method of profiling and monitoring applications. It can provide valuable insight when optimizing and debugging an application. Mainflux includes the Jaeger open tracing framework as a service with its stack by default.
+The Jaeger service will launch with the rest of the Mainflux services. All services can be launched using:
+make run
+
+The Jaeger UI can then be accessed at http://localhost:16686
from a browser. Details about the UI can be found in Jaeger's official documentation.
The Jaeger service can be disabled by using the scale
flag with docker-compose up
and setting the jaeger container to 0.
--scale jaeger=0
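In full, assuming the main composition file at docker/docker-compose.yml, the command might look like this:
docker-compose -f docker/docker-compose.yml up -d --scale jaeger=0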
+
+Jaeger uses 5 ports within the Mainflux framework. These ports can be edited in the .env
file.
Variable | Description | Default
---|---|---
MF_JAEGER_PORT | Agent port for compact jaeger.thrift protocol | 6831
MF_JAEGER_FRONTEND | UI port | 16686
MF_JAEGER_COLLECTOR | Collector for jaeger.thrift directly from clients | 14268
MF_JAEGER_CONFIGS | Configuration server | 5778
MF_JAEGER_URL | Jaeger access from within Mainflux | jaeger:6831
Mainflux provides tracing of messages ingested into the Mainflux platform. The message metadata, such as topic, sub-topic, subscriber and publisher, is also included in traces.
+The messages are tracked from end to end from the point they are published to the consumers where they are stored.
+As an example of using Jaeger, we can look at the traces generated after provisioning the system. Make sure you have run the provisioning script that is part of the Getting Started step.
+Before getting started with Jaeger, there are a few terms that are important to define. A trace
can be thought of as one transaction within the system. A trace is made up of one or more spans
. These are the individual steps that must be taken for a trace to perform its action. A span has tags
and logs
associated with it. Tags are key-value pairs that provide information such as a database type or HTTP method. Tags are useful when filtering traces in the Jaeger UI. Logs are structured messages used at specific points in the trace's transaction. These are typically used to indicate an error.
When first navigating to the Jaeger UI, it will present a search page with an empty results section. There are multiple fields to search from including service, operation, tags and time frames. Clicking Find Traces
will fill the results section with traces containing the selected fields.
The top of the results page includes a scatter plot of the traces and their durations. This can be very useful for finding a trace with a prolonged runtime. Clicking on one of the points will open the trace page of that trace.
Below the graph is a list of all the traces with a summary of their information. Each trace shows a unique identifier, the overall runtime, the spans it is composed of and when it was run. Clicking on one of the traces will open the trace page of that trace.
+ +The trace page provides a more detailed breakdown of the individual span calls. The top of the page shows a chart breaking down what spans the trace is spending its time in. Below the chart are the individual spans and their details. Expanding the spans shows any tags associated with that span and process information. This is also where any errors or logs seen while running the span will be reported.
+This is just a brief overview of the possibilities of Jaeger and its UI. For more information, check out Jaeger's official documentation.
+ + + + + + +Mainflux twins service is built on top of the Mainflux platform. In order to fully understand what follows, be sure to get acquainted with overall Mainflux architecture.
+Twin refers to a digital representation of a real world data system consisting of possibly multiple data sources/producers and/or destinations/consumers (data agents).
+For example, an industrial machine can use multiple protocols such as MQTT, OPC-UA, a regularly updated machine hosted CSV file etc. to send measurement data - such as flowrate, material temperature, etc. - and state metadata - such as engine and chassis temperature, engine rotations per seconds, identity of the current human operator, etc. - as well as to receive control, i.e. actuation messages - such as, turn on/off light, increment/decrement borer speed, change blades, etc.
+A digital twin is an abstract - and usually less detailed - digital replica of a real world system such as the industrial machine we have just described. It is used to create and store information about the system's state at any given moment, to compare system state over a given period of time - so-called diffs or deltas - as well as to control the agents composing the system.
+Any data producer or data consumer - which we refer to here collectively as a data agent - or an interrelated system of data agents, can be represented by means of possibly multiple Mainflux things, channels and subtopics. For example, an OPC-UA server can be represented as a Mainflux thing and its nodes can be represented as multiple Mainflux channels or multiple subtopics of a single Mainflux channel. What is more, you can invert the representation: you can represent the server as a channel and the nodes as things. The Mainflux platform is meant to empower you with the freedom of expression so you can make a digital representation of any data agent according to your needs.
+Although this works well, satisfies the requirements of a wide variety of use cases and corresponds to the intended use of the Mainflux IoT platform, this setup can be insufficient in two important ways. Firstly, different things, channels, and their connections - i.e. Mainflux representations of different data agent structures - are unrelated to each other, i.e. they do not form a meaningful whole and, as a consequence, they do not represent a single unified system. Secondly, the semantic aspect, i.e. the meaning of different things and channels, is not transparent and is defined by the sole use of Mainflux platform entities (channels and things).
+Certainly, we can try to describe things and channels connections and relations as well as their meaning - i.e. their role, position, function in the overall system - by means of their metadata. Although this might work well - with a proviso of a lot of additional effort of writing the relatively complex code to create and parse metadata - it is not a practical approach and we still don't get - at least not out of the box - a readable and useful overview of the system as a whole. Also, this approach does not enable us to answer a simple but very important question, i.e. what was the detailed state of a complete system at a certain moment in time.
+To overcome these problems, Mainflux comes with a digital twin service. The twins service is built on top of the Mainflux platform and relies on its architecture and entities, more precisely, on Mainflux users, things and channels. The primary task of the twin service is to handle Mainflux digital twins. Mainflux digital twin consists of three parts:
+Mainflux Twins service depends on the Mainflux IoT platform. The following diagram shows the place of the twins service in the overall Mainflux architecture:
+ +You use an HTTP client to communicate with the twins service. Every request sent to the twins service is authenticated by users service. Twins service handles CRUD requests and creates, retrieves, updates and deletes twins. The CRUD operations depend on the database to persist and fetch already saved twins.
Twins service listens to the message broker server and intercepts messages passing via the message broker. Every Mainflux message contains information about the channel and subtopic used to send it. The twins service compares this info with the attribute definitions of the twins persisted in the database, fetches the corresponding twins and updates their respective states.
+Before we delve into the twin's anatomy, it is important to realize that, in order to use the Mainflux twin service, you have to provision Mainflux things and channels and you have to connect those things and channels beforehand. As you go, you can modify your things, channels and connections and you can modify your digital twin to reflect these modifications, but you have to have at least a minimal setup in order to use the twin service.
+The twin's general information stores the twin's owner email - the owner is represented by a Mainflux user - the twin's ID (unique) and name (not necessarily unique), the twin's creation and update dates, as well as the twin's revision number. The latter refers to the sequential number of the twin's definition.
+The twin's definition is meant to be a semantic representation of the system's data sources and consumers (data agents). Each data agent is represented by means of an attribute. An attribute consists of the data agent's name and the Mainflux channel and subtopic over which it communicates. Nota bene: each attribute is uniquely defined by the combination of channel and subtopic, so we cannot have two or more attributes with the same channel and subtopic in the same definition.
+Attributes have a state persistence flag that determines whether the messages communicated over the corresponding channel and subtopic trigger the creation of a new twin state. Twin states are persisted in a separate collection of the same database. Currently, the twins service uses MongoDB. InfluxDB support for twins and states persistence is on the roadmap.
+When we define our digital twin, its JSON representation might look like this:
+{
+ "owner": "john.doe@email.net",
+ "id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "name": "grinding machine 2",
+ "revision": 2,
+ "created": "2020-05-05T08:41:39.142Z",
+ "updated": "2020-05-05T08:49:12.638Z",
+ "definitions": [
+ {
+ "id": 0,
+ "created": "2020-05-05T08:41:39.142Z",
+ "attributes": [],
+ "delta": 1000000
+ },
+ {
+ "id": 1,
+ "created": "2020-05-05T08:46:23.207Z",
+ "attributes": [
+ {
+ "name": "engine temperature",
+ "channel": "7ef6c61c-f514-402f-af4b-2401b588bfec",
+ "subtopic": "engine",
+ "persist_state": true
+ },
+ {
+ "name": "chassis temperature",
+ "channel": "7ef6c61c-f514-402f-af4b-2401b588bfec",
+ "subtopic": "chassis",
+ "persist_state": true
+ },
+ {
+ "name": "rotations per sec",
+ "channel": "a254032a-8bb6-4973-a2a1-dbf80f181a86",
+ "subtopic": "",
+ "persist_state": false
+ }
+ ],
+ "delta": 1000000
+ },
+ {
+ "id": 2,
+ "created": "2020-05-05T08:49:12.638Z",
+ "attributes": [
+ {
+ "name": "engine temperature",
+ "channel": "7ef6c61c-f514-402f-af4b-2401b588bfec",
+ "subtopic": "engine",
+ "persist_state": true
+ },
+ {
+ "name": "chassis temperature",
+ "channel": "7ef6c61c-f514-402f-af4b-2401b588bfec",
+ "subtopic": "chassis",
+ "persist_state": true
+ },
+ {
+ "name": "rotations per sec",
+ "channel": "a254032a-8bb6-4973-a2a1-dbf80f181a86",
+ "subtopic": "",
+ "persist_state": false
+ },
+ {
+ "name": "precision",
+ "channel": "aed0fbca-0d1d-4b07-834c-c62f31526569",
+ "subtopic": "",
+ "persist_state": true
+ }
+ ],
+ "delta": 1000000
+ }
+ ]
+}
+
+In the case of the twin above, we begin with an empty definition, the one with the id
0 - we could have provided the definition immediately - and over the course of time, we add two more definitions, so the total number of revisions is 2 (the revision index is zero-based). We decide not to persist the number of rotations per second in our digital twin state. We define it, though, because the definition and its attributes are used not only to define states of a complex data agent system, but also to define the semantic structure of the system. delta
is the number of nanoseconds used to determine whether a received attribute value should trigger the generation of a new state or an update of the current one. The reason for this is to enable state sampling over regular intervals of time (for example, the default delta of 1000000 ns corresponds to 1 ms, i.e. a new state is generated at most once per millisecond). Discarded values are written to the database of choice by Mainflux writers, so you can always retrieve intermediate values if need be.
States are created according to the twin's current definition. A state stores the twin's ID - every state belongs to a single twin - its own ID, the twin's definition number, the creation date and the actual payload. The payload is a set of key-value pairs where a key corresponds to the attribute name and a value is the actual value of the attribute. All SenML value types are supported.
+A JSON representation of a partial list of states might look like this:
+{
+ "total": 28,
+ "offset": 10,
+ "limit": 5,
+ "states": [
+ {
+ "twin_id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "id": 11,
+ "definition": 1,
+ "created": "2020-05-05T08:49:06.167Z",
+ "payload": {
+ "chassis temperature": 0.3394171011161684,
+ "engine temperature": 0.3814079472715233
+ }
+ },
+ {
+ "twin_id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "id": 12,
+ "definition": 1,
+ "created": "2020-05-05T08:49:12.168Z",
+ "payload": {
+ "chassis temperature": 1.8116442194724147,
+ "engine temperature": 0.3814079472715233
+ }
+ },
+ {
+ "twin_id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "id": 13,
+ "definition": 2,
+ "created": "2020-05-05T08:49:18.174Z",
+ "payload": {
+ "chassis temperature": 1.8116442194724147,
+ "engine temperature": 3.2410616702795814
+ }
+ },
+ {
+ "twin_id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "id": 14,
+ "definition": 2,
+ "created": "2020-05-05T08:49:19.145Z",
+ "payload": {
+ "chassis temperature": 3.2410616702795814,
+ "engine temperature": 3.2410616702795814,
+ "precision": 8.922156489392854
+ }
+ },
+ {
+ "twin_id": "a838e608-1c1b-4fea-9c34-def877473a89",
+ "id": 15,
+ "definition": 2,
+ "created": "2020-05-05T08:49:24.178Z",
+ "payload": {
+ "chassis temperature": 0.8694383878692546,
+ "engine temperature": 3.2410616702795814,
+ "precision": 8.922156489392854
+ }
+ }
+ ]
+}
+
+As you can see, the first two states correspond to definition 1 and have only two attributes in the payload. The rest of the states are based on definition 2, where we persist three attributes and, as a consequence, their payloads consist of three entries.
+A twin belongs to a Mainflux user, a tenant representing a physical person or an organization. The user owns Mainflux things and channels as well as twins. The Mainflux user provides authorization and authentication mechanisms to the twins service. For more details, please see Authentication with Mainflux keys. In practical terms, we need to create a Mainflux user in order to create a digital twin. Every twin belongs to exactly one user. One user can have an unlimited number of digital twins.
+For more information about the Twins service HTTP API please refer to the twins service OpenAPI file.
+Create and update requests use a JSON body to initialize and modify, respectively, a twin. You can omit every piece of data - every key-value pair - from the JSON. However, you must send at least an empty JSON body.
+{
+ "name": "twin_name",
+ "definition": {
+ "attributes": [
+ {
+ "name": "temperature",
+ "channel": "3b57b952-318e-47b5-b0d7-a14f61ecd03b",
+ "subtopic": "temperature",
+ "persist_state": true
+ },
+ {
+ "name": "humidity",
+ "channel": "3b57b952-318e-47b5-b0d7-a14f61ecd03b",
+ "subtopic": "humidity",
+ "persist_state": false
+ },
+ {
+ "name": "pressure",
+ "channel": "7ef6c61c-f514-402f-af4b-2401b588bfec",
+ "subtopic": "",
+ "persist_state": true
+ }
+ ],
+ "delta": 1
+ }
+}
+
The create request uses the POST HTTP method to create a twin:
+curl -s -S -i -X POST -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" http://localhost:9018/twins -d '<twin_data>'
+
+If you do not supply the definition, the empty definition of the form
+{
+ "id": 0,
+ "created": "2020-05-05T08:41:39.142Z",
+ "attributes": [],
+ "delta": 1000000
+}
+
+will be created.
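For instance, a minimal twin with just a name - relying on that empty default definition - could be created like this (the token is a placeholder):
curl -s -S -i -X POST -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" http://localhost:9018/twins -d '{"name": "grinding machine 2"}'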
+curl -s -S -i -X PUT -H "Content-Type: application/json" -H "Authorization: Bearer <user_token>" http://localhost:9018/<twin_id> -d '<twin_data>'
+
+curl -s -S -i -X GET -H "Authorization: Bearer <user_token>" http://localhost:9018/twins/<twin_id>
+
+curl -s -S -i -X GET -H "Authorization: Bearer <user_token>" http://localhost:9018/twins
+
+List requests accept limit
and offset
query parameters. By default, i.e. without these parameters, list requests fetch only the first ten twins (or fewer, if there are fewer than ten twins).
You can fetch twins [10-29) like this:
+curl -s -S -i -X GET -H "Authorization: Bearer <user_token>" http://localhost:9018/twins?offset=10&limit=20
+
+curl -s -S -i -X DELETE -H "Authorization: Bearer <user_token>" http://localhost:9018/twins/<twin_id>
+
+curl -s -S -i -X GET -H "Authorization: Bearer <user_token>" http://localhost:9018/states/<twin_id>
+
+List requests accept limit
and offset
query parameters. By default, i.e. without these parameters, list requests fetch only the first ten states (or fewer, if there are fewer than ten states).
You can fetch states [10-29) like this:
+curl -s -S -i -X GET -H "Authorization: Bearer <user_token>" http://localhost:9018/states/<twin_id>?offset=10&limit=20
+
+Every twin- and state-related operation publishes notifications via the message broker. To fully understand what follows, please read about Mainflux messaging capabilities and utilities.
+In order to pick up these notifications, you have to create a Mainflux channel before you start the twins service and inform the twins service about the channel by means of an environment variable, like this:
+export MF_TWINS_CHANNEL_ID=f6894dfe-a7c9-4eef-a614-637ebeea5b4c
+
+The twins service will use this channel to publish notifications related to twins creation, update, retrieval and deletion. It will also publish notifications related to state saving into the database.
+All notifications will be published on the following message broker subject:
+channels.<mf_twins_channel_id>.<optional_subtopic>
+
+where <optional_subtopic>
is one of the following:
create.success
- on successful twin creation,
create.failure
- on twin creation failure,
update.success
- on successful twin update,
update.failure
- on twin update failure,
get.success
- on successful twin retrieval,
get.failure
- on twin retrieval failure,
remove.success
- on successful twin deletion,
remove.failure
- on twin deletion failure,
save.success
- on successful state save,
save.failure
- on state save failure.
Normally, you can use the wildcards of the default message broker, NATS. In order to learn more about Mainflux channel topic composition, please read about subtopics. The point is to be able to subscribe to all subjects, or to any operation pair subject - e.g. create.success/failure - by means of one connection, and to read all messages, or all operation-related messages, in the context of the same subscription.
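As a sketch, using the NATS CLI (assumed to be installed separately), you could subscribe to all twin notifications at once, or only to a single operation pair, like this:
nats sub "channels.<mf_twins_channel_id>.>"
nats sub "channels.<mf_twins_channel_id>.create.*"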
+Since messages published on message broker are republished on any other protocol supported by Mainflux - HTTP, MQTT, CoAP and WS - you can use any supported protocol client to pick up notifications.
+ + + + + + +