
Always writing table to a single node; Percona Postgres Citus sharding not working #878

Open
SHUFIL opened this issue Aug 28, 2024 · 1 comment
Labels: bug (Something isn't working)

Comments

SHUFIL commented Aug 28, 2024

Report

I need to shard a single table across multiple nodes. I followed this doc.

More about the problem

I need to shard a single table across multiple nodes using sharding, but that is not happening: data is always written to only a single node.
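The report does not show how the table was distributed. For context, Citus only spreads a table across worker nodes after it has been registered as a distributed table; a minimal sketch (the distribution column `event_id` is a hypothetical name, not taken from this report):

```sql
-- Hypothetical example: register 'events' as a distributed table.
-- Citus then hash-partitions rows across shards on the worker nodes
-- based on the chosen distribution column.
SELECT create_distributed_table('events', 'event_id');
```

If the table was never distributed, or all of its shard placements ended up on one node, every write will land on a single node.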

Steps to reproduce

  1. I tested using the commands below:
    SET citus.explain_all_tasks TO on;
    EXPLAIN ANALYZE SELECT * FROM events;
  2. When I analyze with EXPLAIN ANALYZE, the data is written to only a single node, not both; see the output below.
 testdb=# EXPLAIN ANALYZE SELECT * FROM events;
                                                          QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------
 Custom Scan (Citus Adaptive)  (cost=0.00..0.00 rows=100000 width=52) (actual time=38.927..38.928 rows=5 loops=1)
   Task Count: 32
   Tuple data received from nodes: 290 bytes
   Tasks Shown: One of 32
   ->  Task
         Tuple data received from node: 0 bytes
         Node: host=192.168.1.100 port=5432 dbname=testdb
         ->  Seq Scan on events_102040 events  (cost=0.00..20.20 rows=1020 width=52) (actual time=0.005..0.006 rows=0 loops=1)
             Planning Time: 0.509 ms
             Execution Time: 0.032 ms
 Planning Time: 1.453 ms
 Execution Time: 38.974 ms
(12 rows)
  3. Node details are returned correctly from both nodes:
postgres=# SELECT * FROM pg_dist_node;
 nodeid | groupid |   nodename    | nodeport | noderack | hasmetadata | isactive | noderole | nodecluster | metadatasynced | shouldhaveshards
--------+---------+---------------+----------+----------+-------------+----------+----------+-------------+----------------+------------------
     18 |      17 | 192.168.1.100 |     5432 | default  | t           | t        | primary  | default     | t              | t
      1 |       0 | 192.168.1.101 |     5432 | default  | t           | t        | primary  | default     | t              | f
     20 |      19 | 192.168.1.150 |     5435 | default  | t           | t        | primary  | default     | t              | t
(3 rows)

Worker node list, as seen from both servers:

postgres=# SELECT * FROM citus_get_active_worker_nodes();
   node_name   | node_port
---------------+-----------
 192.168.1.150 |      5435
 192.168.1.100 |      5432
(2 rows)
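One way to check where the shards of the table actually live (a diagnostic sketch, not part of the original report) is to count shard placements per node:

```sql
-- Count how many shards of 'events' are placed on each node.
SELECT nodename, nodeport, count(*) AS shard_count
FROM citus_shards
WHERE table_name = 'events'::regclass
GROUP BY nodename, nodeport;
```

If all 32 shards report the same nodename, the placements themselves are concentrated on one node; in that case a shard rebalance (`SELECT citus_rebalance_start();`, available in Citus 11.1+) may be needed.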

Versions

Database: Percona Distribution for PostgreSQL

postgres=# SELECT VERSION();
                                                          version
------------------------------------------------------------------------------------------------------------------------------
PostgreSQL 15.7 - Percona Distribution on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)

citus_version

postgres=# select citus_version();
                                           citus_version
----------------------------------------------------------------------------------------------------
 Citus 12.0.1 on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0, 64-bit
(1 row)

Anything else?

No response

@SHUFIL SHUFIL added the bug Something isn't working label Aug 28, 2024