upserted iceberg table cannot be queried by Spark and Clickhouse #13068

Closed
cloudcarver opened this issue Oct 26, 2023 · 4 comments · Fixed by #13232
Labels
type/bug Something isn't working
Comments

cloudcarver (Contributor) commented Oct 26, 2023

Describe the bug

Both Spark and ClickHouse work fine after I replace `type = 'upsert'` with `type = 'append-only', force_append_only = 'true'`.
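For reference, a minimal sketch of that workaround, showing only the changed options (everything else in the `WITH` clause stays exactly as in the sink definition below; the elided parts are placeholders, not meant to be copied verbatim):

```sql
-- Workaround sketch: swap the upsert options for append-only ones.
-- Keep the remaining connector, catalog.*, s3.*, table and warehouse
-- options as in the original sink definition.
CREATE SINK ridesharing_stat AS (
  -- ... same query as in the original sink ...
) WITH (
  connector = 'iceberg',
  type = 'append-only',         -- was: type = 'upsert'
  force_append_only = 'true'    -- forces the changelog into appends
  -- primary_key should no longer be needed for an append-only sink
  -- ... remaining options unchanged ...
);
```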

🚿 Source

CREATE SOURCE drivers (
        status STRING,
        driver_id STRING,
        location_id STRING,
        location_lon double precision,
        location_lat double precision,
        timestamp timestamptz
) WITH (
        connector = 'kafka',
        topic = 'drivers_pingshan',
        {kafka_info}
) FORMAT PLAIN ENCODE JSON;

🪣 Sink

CREATE SINK ridesharing_stat AS (
  WITH way_events AS (
    SELECT 
      way_id, 
      drivers.driver_id AS driver_id,
      timestamp
    FROM drivers 
      JOIN node 
      ON drivers.location_id = node.id 
  ) SELECT 
      way_id, 
      count(*) AS cnt,
      window_start::timestamp as timestamp
    FROM TUMBLE(way_events, timestamp, INTERVAL '5 MINUTE')
    GROUP BY way_id, window_start
) WITH (
  connector='iceberg',
  type = 'upsert',
  primary_key = 'way_id',
  catalog.type = 'rest',
  catalog.uri = 'http://rest:8181',
  s3.endpoint = 'http://minio:9000',
  s3.access.key = 'admin',
  s3.secret.key = 'password',
  s3.region = 'us-east-1',
  table.name = 'ridesharing.stat',
  warehouse.path = 's3://warehouse/ridesharing/stat',
  database.name = 'default_catalog'
);

🌟 Spark
Spark SQL fails on both `SELECT count(*) FROM ridesharing.stat;` and `SELECT timestamp, way_id, cnt FROM ridesharing.stat LIMIT 10;`:

[INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
org.apache.spark.SparkException: [INTERNAL_ERROR] The Spark SQL phase optimization failed with an internal error. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace.
        at org.apache.spark.SparkException$.internalError(SparkException.scala:88)
        at org.apache.spark.sql.execution.QueryExecution$.toInternalError(QueryExecution.scala:516)
        at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:528)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$1(QueryExecution.scala:202)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
        at org.apache.spark.sql.execution.QueryExecution.executePhase(QueryExecution.scala:201)
        at org.apache.spark.sql.execution.QueryExecution.optimizedPlan$lzycompute(QueryExecution.scala:139)
        at org.apache.spark.sql.execution.QueryExecution.optimizedPlan(QueryExecution.scala:135)
        at org.apache.spark.sql.execution.QueryExecution.assertOptimized(QueryExecution.scala:153)
        at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:171)
        at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:168)
        at org.apache.spark.sql.execution.QueryExecution.simpleString(QueryExecution.scala:221)
        at org.apache.spark.sql.execution.QueryExecution.org$apache$spark$sql$execution$QueryExecution$$explainString(QueryExecution.scala:266)
        at org.apache.spark.sql.execution.QueryExecution.explainString(QueryExecution.scala:235)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$6(SQLExecution.scala:112)
        at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:195)
        at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:103)
        at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:827)
        at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:65)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLDriver.run(SparkSQLDriver.scala:69)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processCmd(SparkSQLCLIDriver.scala:415)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1(SparkSQLCLIDriver.scala:533)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.$anonfun$processLine$1$adapted(SparkSQLCLIDriver.scala:527)
        at scala.collection.Iterator.foreach(Iterator.scala:943)
        at scala.collection.Iterator.foreach$(Iterator.scala:943)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1431)
        at scala.collection.IterableLike.foreach(IterableLike.scala:74)
        at scala.collection.IterableLike.foreach$(IterableLike.scala:73)
        at scala.collection.AbstractIterable.foreach(Iterable.scala:56)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.processLine(SparkSQLCLIDriver.scala:527)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver$.main(SparkSQLCLIDriver.scala:307)
        at org.apache.spark.sql.hive.thriftserver.SparkSQLCLIDriver.main(SparkSQLCLIDriver.scala)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at org.apache.spark.deploy.JavaMainApplication.start(SparkApplication.scala:52)
        at org.apache.spark.deploy.SparkSubmit.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:1020)
        at org.apache.spark.deploy.SparkSubmit.doRunMain$1(SparkSubmit.scala:192)
        at org.apache.spark.deploy.SparkSubmit.submit(SparkSubmit.scala:215)
        at org.apache.spark.deploy.SparkSubmit.doSubmit(SparkSubmit.scala:91)
        at org.apache.spark.deploy.SparkSubmit$$anon$2.doSubmit(SparkSubmit.scala:1111)
        at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:1120)
        at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.NullPointerException
        at org.apache.iceberg.BaseDistributedDataScan.mayHaveEqualityDeletes(BaseDistributedDataScan.java:395)
        at org.apache.iceberg.BaseDistributedDataScan.doPlanFiles(BaseDistributedDataScan.java:149)
        at org.apache.iceberg.SnapshotScan.planFiles(SnapshotScan.java:139)
        at org.apache.iceberg.spark.source.SparkPartitioningAwareScan.tasks(SparkPartitioningAwareScan.java:174)
        at org.apache.iceberg.spark.source.SparkPartitioningAwareScan.taskGroups(SparkPartitioningAwareScan.java:202)
        at org.apache.iceberg.spark.source.SparkPartitioningAwareScan.outputPartitioning(SparkPartitioningAwareScan.java:104)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:44)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$$anonfun$partitioning$1.applyOrElse(V2ScanPartitioningAndOrdering.scala:42)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$1(TreeNode.scala:512)
        at org.apache.spark.sql.catalyst.trees.CurrentOrigin$.withOrigin(TreeNode.scala:104)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:512)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.trees.TreeNode.$anonfun$transformDownWithPruning$3(TreeNode.scala:517)
        at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren(TreeNode.scala:1249)
        at org.apache.spark.sql.catalyst.trees.UnaryLike.mapChildren$(TreeNode.scala:1248)
        at org.apache.spark.sql.catalyst.plans.logical.Aggregate.mapChildren(basicLogicalOperators.scala:1122)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDownWithPruning(TreeNode.scala:517)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.org$apache$spark$sql$catalyst$plans$logical$AnalysisHelper$$super$transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning(AnalysisHelper.scala:267)
        at org.apache.spark.sql.catalyst.plans.logical.AnalysisHelper.transformDownWithPruning$(AnalysisHelper.scala:263)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.plans.logical.LogicalPlan.transformDownWithPruning(LogicalPlan.scala:31)
        at org.apache.spark.sql.catalyst.trees.TreeNode.transformDown(TreeNode.scala:488)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.partitioning(V2ScanPartitioningAndOrdering.scala:42)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.$anonfun$apply$1(V2ScanPartitioningAndOrdering.scala:35)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.$anonfun$apply$3(V2ScanPartitioningAndOrdering.scala:38)
        at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
        at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
        at scala.collection.immutable.List.foldLeft(List.scala:91)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.apply(V2ScanPartitioningAndOrdering.scala:37)
        at org.apache.spark.sql.execution.datasources.v2.V2ScanPartitioningAndOrdering$.apply(V2ScanPartitioningAndOrdering.scala:33)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$2(RuleExecutor.scala:222)
        at scala.collection.LinearSeqOptimized.foldLeft(LinearSeqOptimized.scala:126)
        at scala.collection.LinearSeqOptimized.foldLeft$(LinearSeqOptimized.scala:122)
        at scala.collection.immutable.List.foldLeft(List.scala:91)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1(RuleExecutor.scala:219)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$execute$1$adapted(RuleExecutor.scala:211)
        at scala.collection.immutable.List.foreach(List.scala:431)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.execute(RuleExecutor.scala:211)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.$anonfun$executeAndTrack$1(RuleExecutor.scala:182)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker$.withTracker(QueryPlanningTracker.scala:88)
        at org.apache.spark.sql.catalyst.rules.RuleExecutor.executeAndTrack(RuleExecutor.scala:182)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$optimizedPlan$1(QueryExecution.scala:143)
        at org.apache.spark.sql.catalyst.QueryPlanningTracker.measurePhase(QueryPlanningTracker.scala:111)
        at org.apache.spark.sql.execution.QueryExecution.$anonfun$executePhase$2(QueryExecution.scala:202)
        at org.apache.spark.sql.execution.QueryExecution$.withInternalError(QueryExecution.scala:526)
        ... 41 more

🏚️ ClickHouse
Surprisingly, ClickHouse works with `SELECT timestamp, way_id, cnt FROM ridesharing_stat LIMIT 10`, but not with `SELECT COUNT(*) FROM ridesharing_stat;`:

Received exception from server (version 23.9.2):
Code: 8. DB::Exception: Received from clickhouse:9000. DB::Exception: Not found field (cnt) in the following Arrow schema:
file_path: string not null
pos: int64 not null
: While executing ParquetBlockInputFormat: While executing Iceberg. (THERE_IS_NO_COLUMN)
(query: SELECT COUNT(*) FROM ridesharing_stat)

Log from `/var/log/clickhouse-server`:

2023.10.26 07:44:41.154303 [ 48 ] {8b8467ea-2bcb-4081-a95e-64d5879dd9ad} <Error> TCPHandler: Code: 8. DB::Exception: Not found field (cnt) in the following Arrow schema:
file_path: string not null
pos: int64 not null
: While executing ParquetBlockInputFormat: While executing Iceberg. (THERE_IS_NO_COLUMN), Stack trace (when copying this message, always include the lines below):

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c741d97 in /usr/bin/clickhouse
1. DB::Exception::Exception<String const&, String>(int, FormatStringHelperImpl<std::type_identity<String const&>::type, std::type_identity<String>::type>, String const&, String&&) @ 0x0000000007c8db87 in /usr/bin/clickhouse
2. DB::ParquetBlockInputFormat::initializeIfNeeded() @ 0x00000000134ddaf9 in /usr/bin/clickhouse
3. DB::ParquetBlockInputFormat::generate() @ 0x00000000134dff9f in /usr/bin/clickhouse
4. DB::ISource::tryGenerate() @ 0x0000000013369eb8 in /usr/bin/clickhouse
5. DB::ISource::work() @ 0x00000000133699ea in /usr/bin/clickhouse
6. DB::ExecutionThreadContext::executeTask() @ 0x00000000133818ba in /usr/bin/clickhouse
7. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000013378370 in /usr/bin/clickhouse
8. DB::PipelineExecutor::executeStep(std::atomic<bool>*) @ 0x0000000013377b28 in /usr/bin/clickhouse
9. DB::PullingPipelineExecutor::pull(DB::Chunk&) @ 0x000000001338633a in /usr/bin/clickhouse
10. DB::StorageS3Source::generate() @ 0x0000000012a64d1f in /usr/bin/clickhouse
11. DB::ISource::tryGenerate() @ 0x0000000013369eb8 in /usr/bin/clickhouse
12. DB::ISource::work() @ 0x00000000133699ea in /usr/bin/clickhouse
13. DB::ExecutionThreadContext::executeTask() @ 0x00000000133818ba in /usr/bin/clickhouse
14. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000013378370 in /usr/bin/clickhouse
15. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001337948f in /usr/bin/clickhouse
16. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c828e7f in /usr/bin/clickhouse
17. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c82c99c in /usr/bin/clickhouse
18. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c82b1c7 in /usr/bin/clickhouse
19. ? @ 0x00007fab963c9609 in ?
20. ? @ 0x00007fab962ee133 in ?


How did you deploy RisingWave?

version: "3.8"
services:
  kafka:
    ports:
    - 9092:9092
    image: bitnami/kafka:3.2.3
    networks:
      ridesharing-net:
    environment:
    - KAFKA_CFG_PROCESS_ROLES=broker,controller
    - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
    - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=PLAINTEXT
    - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
    - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092
    - KAFKA_BROKER_ID=1
    - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=1@kafka:9093
    - ALLOW_PLAINTEXT_LISTENER=yes
    - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
    - KAFKA_CFG_LOG_RETENTION_BYTES=10737418240
    - KAFKA_CFG_LOG_RETENTION_HOURS=168
    - KAFKA_CFG_DEFAULT_REPLICATION_FACTOR=1
    - KAFKA_CFG_NUM_NETWORK_THREADS=16
    - KAFKA_CFG_NUM_IO_THREADS=10
    - KAFKA_CFG_NODE_ID=1
    - KAFKA_TOPIC_CREATION_ENABLE=true

  # generate data and send to the kafka cluster
  producer:
    depends_on:
    - kafka
    build:
      context: .
      dockerfile: Dockerfile.simulator
    networks:
      ridesharing-net:
    volumes:
    - ./config:/app/config
    entrypoint:
    - sh
    - -c 
    # wait for the kafka cluster to be ready
    - sleep 10 && /app/simulator/simulator-bin -config /app/config/config.json
  
  risingwave:
    image: risingwavelabs/risingwave:v1.3.0
    networks:
      ridesharing-net:
    ports:
    - 5690:5690
    - 5691:5691
    - 4566:4566
    entrypoint:
    - /risingwave/bin/risingwave
    - playground

  # a one-time script to create tables and materialized views
  initrw:
    depends_on:
    - risingwave
    build:
      context: .
      dockerfile: Dockerfile.initrw
    networks:
      ridesharing-net:
    restart: on-failure
    environment:
    - KAFKA_BROKERS=kafka:9092
    - KAFKA_SCAN_STARTUP_MODE=earliest
    volumes:
    - ./data:/app/data
    - ./ingestion:/app/ingestion
    entrypoint:
    - bash
    - -c
    - "sleep 15 && python3 /app/ingestion/generator/upload_map.py /app/data/hdps.osm postgresql://root:@risingwave:4566/dev?sslmode=disable"

  # frontend
  frontend:
    image: node:20-alpine3.17
    networks:
      ridesharing-net:
    volumes:
    - ./web:/web
    working_dir: /web
    ports: 
    - 3000:3000
    entrypoint:
    - sh
    - -c
    - "yarn && yarn dev"
    environment:
    - NEXT_PUBLIC_DB_HOST=risingwave
    - NEXT_PUBLIC_DB_PORT=4566
    - NEXT_PUBLIC_DB_USER=root
    - NEXT_PUBLIC_DB_PASSWORD=
    - NEXT_PUBLIC_DB_DB=dev
    - NEXT_PUBLIC_DB_OPTIONS=

  clickhouse:
    image: clickhouse/clickhouse-server
    networks:
      ridesharing-net:
    environment:
      - CLICKHOUSE_ADMIN_PASSWORD=
    volumes:
      - ./clickhouse/users.xml:/etc/clickhouse-server/users.xml
    ports:
      - '8123:8123'

  rest:
    image: tabulario/iceberg-rest
    networks:
      ridesharing-net:
    container_name: iceberg-rest
    ports:
      - 8181:8181
    environment:
      - AWS_ACCESS_KEY_ID=admin
      - AWS_SECRET_ACCESS_KEY=password
      - AWS_REGION=us-east-1
      - CATALOG_WAREHOUSE=s3://warehouse/
      - CATALOG_IO__IMPL=org.apache.iceberg.aws.s3.S3FileIO
      - CATALOG_S3_ENDPOINT=http://minio:9000

  minio:
    image: minio/minio
    networks:
      ridesharing-net:
        aliases:
          - warehouse.minio
    container_name: minio
    environment:
      - MINIO_ROOT_USER=admin
      - MINIO_ROOT_PASSWORD=password
      - MINIO_DOMAIN=minio
    ports:
      - 9001:9001
      - 9000:9000
    command: ["server", "/data", "--console-address", ":9001"]

  mc:
    depends_on:
      - minio
    image: minio/mc
    networks:
      ridesharing-net:
    container_name: mc
    environment:
      - AWS_ACCESS_KEY_ID=admin
      - AWS_SECRET_ACCESS_KEY=password
      - AWS_REGION=us-east-1
    entrypoint: >
      /bin/sh -c "
      until (/usr/bin/mc config host add minio http://minio:9000 admin password) do echo '...waiting...' && sleep 1; done;
      /usr/bin/mc rm -r --force minio/warehouse;
      /usr/bin/mc mb minio/warehouse;
      /usr/bin/mc policy set public minio/warehouse;
      tail -f /dev/null
      "
  
  spark:
    image: tabulario/spark-iceberg
    container_name: spark-iceberg
    build: spark/
    networks:
      ridesharing-net:
    depends_on:
      - rest
      - minio
    volumes:
      - ./warehouse:/home/iceberg/warehouse
      - ./notebooks:/home/iceberg/notebooks/notebooks
    environment:
      - AWS_ACCESS_KEY_ID=admin
      - AWS_SECRET_ACCESS_KEY=password
      - AWS_REGION=us-east-1
    ports:
      - 8888:8888
      - 8080:8080
      - 10000:10000
      - 10001:10001

networks:
  ridesharing-net:

The version of RisingWave

dev=> select version();
                                  version                                   
----------------------------------------------------------------------------
 PostgreSQL 9.5-RisingWave-1.3.0 (c4c31bdc5e8763ae65ec23293e8c07bdfd4ab4df)
(1 row)

Additional context

No response

@cloudcarver cloudcarver added the type/bug Something isn't working label Oct 26, 2023
@github-actions github-actions bot added this to the release-1.4 milestone Oct 26, 2023
kwannoel (Contributor) commented Nov 2, 2023

Hi, any updates? Can we prioritize this issue?

liurenjie1024 (Contributor) commented:

> Hi, any updates? Can we prioritize this issue?

There is a PR fixing this under review.

liurenjie1024 (Contributor) commented:

cc @mikechesterwang Could you give it a try next week with the nightly image?
