
Unable to complete an installation of 24.5.X or 24.6.0 while a migration from 24.4.2 to 24.5.0 works. #6049

Open
plemelin opened this issue Jun 19, 2024 · 10 comments

@plemelin

Self-Hosted Version

24.4.2

CPU Architecture

x86_64

Docker Version

26.1.4

Docker Compose Version

2.27.1

Steps to Reproduce

git clone https://github.com/getsentry/onpremise.git
cd onpremise
# Use same configuration as 24.4.2
./install.sh --skip-user-creation --no-report-self-hosted-issues

Expected Result

I get to the point where I can do a docker compose up -d

Actual Result

The install seems to fail at topic creation. The following lines are repeated for each topic:

2024-06-19 01:48:38,586 Initializing Snuba...
2024-06-19 01:48:39,930 Snuba initialization took 1.3452168660005555s
2024-06-19 01:48:40,228 Initializing Snuba...
2024-06-19 01:48:41,540 Snuba initialization took 1.311952663003467s
2024-06-19 01:48:41,543 Attempting to connect to Kafka (attempt 0)...
2024-06-19 01:48:41,562 Connected to Kafka on attempt 0
2024-06-19 01:48:41,597 Creating Kafka topics...
2024-06-19 01:48:42,598 Failed to create topic events
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/utils/manage_topics.py", line 31, in create_topics
    future.result()
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 456, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
  File "/usr/local/lib/python3.11/site-packages/confluent_kafka/admin/__init__.py", line 131, in _make_topics_result
    result = f.result()
             ^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 449, in result
    return self.__get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/concurrent/futures/_base.py", line 401, in __get_result
    raise self._exception
cimpl.KafkaException: KafkaError{code=_TIMED_OUT,val=-185,str="Failed while waiting for response from broker: Local: Timed out"}

* SNIP *

2024-06-19 02:41:06,028 Failed to create topic shared-resources-usage

* SNIP *

Then we get the crash:

 Container sentry-self-hosted-clickhouse-1  Running
 Container sentry-self-hosted-redis-1  Running
 Container sentry-self-hosted-zookeeper-1  Running
 Container sentry-self-hosted-kafka-1  Running
 Container sentry-self-hosted-zookeeper-1  Waiting
 Container sentry-self-hosted-zookeeper-1  Healthy
2024-06-19 02:41:08,751 Initializing Snuba...
2024-06-19 02:41:10,214 Snuba initialization took 1.4635010110214353s
2024-06-19 02:41:10,540 Initializing Snuba...
2024-06-19 02:41:12,302 Snuba initialization took 1.7629065410001203s
{"module": "snuba.migrations.connect", "event": "Snuba has only been tested on Clickhouse versions up to 23.3.19.33 (clickhouse:9000 - 23.8.11.29). Higher versions might not be supported.", "severity": "warning", "timestamp": "2024-06-19T02:41:12.315825Z"}
{"module": "snuba.migrations.runner", "event": "Running migration: 0001_migrations", "severity": "info", "timestamp": "2024-06-19T02:41:12.338930Z"}
{"module": "snuba.migrations.operations", "event": "Executing op: CREATE TABLE IF NOT EXISTS migra...", "severity": "info", "timestamp": "2024-06-19T02:41:12.339060Z"}
{"module": "snuba.migrations.operations", "event": "Executing on local node: clickhouse:9000", "severity": "info", "timestamp": "2024-06-19T02:41:12.339116Z"}
{"module": "snuba.migrations.runner", "event": "Finished: 0001_migrations", "severity": "info", "timestamp": "2024-06-19T02:41:12.357776Z"}
{"module": "snuba.migrations.runner", "event": "Running migration: 0007_groupedmessages", "severity": "info", "timestamp": "2024-06-19T02:41:12.387322Z"}
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/clickhouse/native.py", line 200, in execute
    result_data = query_execute()
                  ^^^^^^^^^^^^^^^
  File "/usr/src/snuba/snuba/clickhouse/native.py", line 183, in query_execute
    return conn.execute(  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 373, in execute
    rv = self.process_ordinary_query(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 571, in process_ordinary_query
    return self.receive_result(with_column_types=with_column_types,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 204, in receive_result
    return result.get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/result.py", line 50, in get_result
    for packet in self.packet_generator:
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 220, in packet_generator
    packet = self.receive_packet()
             ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 237, in receive_packet
    raise packet.exception
clickhouse_driver.errors.ServerException: Code: 74.
DB::ErrnoException. DB::ErrnoException: Cannot read from file 78, errno: 1, strerror: Operation not permitted: While executing MergeTreeInOrder. Stack trace:

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(String const&, int, int, std::optional<String> const&) @ 0x000000000c621494 in /usr/bin/clickhouse
2. DB::ThreadPoolReader::submit(DB::IAsynchronousReader::Request) @ 0x0000000010c30ab6 in /usr/bin/clickhouse
3. DB::AsynchronousReadBufferFromFileDescriptor::asyncReadInto(char*, unsigned long, Priority) @ 0x0000000010c27a67 in /usr/bin/clickhouse
4. DB::AsynchronousReadBufferFromFileDescriptor::nextImpl() @ 0x0000000010c28020 in /usr/bin/clickhouse
5. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0x0000000011293d7b in /usr/bin/clickhouse
6. DB::CompressedReadBufferFromFile::nextImpl() @ 0x0000000011295c85 in /usr/bin/clickhouse
7. DB::MergeTreeMarksLoader::loadMarksImpl() @ 0x0000000012d680db in /usr/bin/clickhouse
8. DB::MergeTreeMarksLoader::loadMarks() @ 0x0000000012d672fd in /usr/bin/clickhouse
9. DB::MergeTreeMarksLoader::getMark(unsigned long, unsigned long) @ 0x0000000012d66058 in /usr/bin/clickhouse
10. DB::MergeTreeReaderCompact::initialize() @ 0x0000000012d83abf in /usr/bin/clickhouse
11. DB::MergeTreeReaderCompact::readRows(unsigned long, unsigned long, bool, unsigned long, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012d8665c in /usr/bin/clickhouse
12. DB::MergeTreeRangeReader::DelayedStream::finalize(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012dacd22 in /usr/bin/clickhouse
13. DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x0000000012db55c0 in /usr/bin/clickhouse
14. DB::IMergeTreeSelectAlgorithm::readFromPartImpl() @ 0x0000000012da97c8 in /usr/bin/clickhouse
15. DB::IMergeTreeSelectAlgorithm::readFromPart() @ 0x0000000012daa4fa in /usr/bin/clickhouse
16. DB::IMergeTreeSelectAlgorithm::read() @ 0x0000000012da76f5 in /usr/bin/clickhouse
17. DB::MergeTreeSource::tryGenerate() @ 0x00000000135dbd0f in /usr/bin/clickhouse
18. DB::ISource::work() @ 0x0000000013185c06 in /usr/bin/clickhouse
19. DB::ExecutionThreadContext::executeTask() @ 0x000000001319d61a in /usr/bin/clickhouse
20. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000013194410 in /usr/bin/clickhouse
21. DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000013193742 in /usr/bin/clickhouse
22. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000131a0faf in /usr/bin/clickhouse
23. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c700e24 in /usr/bin/clickhouse
24. ? @ 0x0000793329c50609 in ?
25. ? @ 0x0000793329b75353 in ?


The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/bin/snuba", line 33, in <module>
    sys.exit(load_entry_point('snuba', 'console_scripts', 'snuba')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1130, in __call__
    return self.main(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1055, in main
    rv = self.invoke(ctx)
         ^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1657, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
                           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 1404, in invoke
    return ctx.invoke(self.callback, **ctx.params)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/click/core.py", line 760, in invoke
    return __callback(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/snuba/snuba/cli/migrations.py", line 114, in migrate
    runner.run_all(
  File "/usr/src/snuba/snuba/migrations/runner.py", line 219, in run_all
    self._run_migration_impl(
  File "/usr/src/snuba/snuba/migrations/runner.py", line 297, in _run_migration_impl
    migration.forwards(context, dry_run, columns_states)
  File "/usr/src/snuba/snuba/migrations/migration.py", line 167, in forwards
    update_status(Status.IN_PROGRESS)
  File "/usr/src/snuba/snuba/migrations/runner.py", line 449, in _update_migration_status
    next_version = self._get_next_version(migration_key)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/snuba/snuba/migrations/runner.py", line 464, in _get_next_version
    result = self.__connection.execute(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/src/snuba/snuba/clickhouse/native.py", line 283, in execute
    raise ClickhouseError(e.message, code=e.code) from e
snuba.clickhouse.errors.ClickhouseError: DB::ErrnoException. DB::ErrnoException: Cannot read from file 78, errno: 1, strerror: Operation not permitted: While executing MergeTreeInOrder. Stack trace:

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(String const&, int, int, std::optional<String> const&) @ 0x000000000c621494 in /usr/bin/clickhouse
2. DB::ThreadPoolReader::submit(DB::IAsynchronousReader::Request) @ 0x0000000010c30ab6 in /usr/bin/clickhouse
3. DB::AsynchronousReadBufferFromFileDescriptor::asyncReadInto(char*, unsigned long, Priority) @ 0x0000000010c27a67 in /usr/bin/clickhouse
4. DB::AsynchronousReadBufferFromFileDescriptor::nextImpl() @ 0x0000000010c28020 in /usr/bin/clickhouse
5. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0x0000000011293d7b in /usr/bin/clickhouse
6. DB::CompressedReadBufferFromFile::nextImpl() @ 0x0000000011295c85 in /usr/bin/clickhouse
7. DB::MergeTreeMarksLoader::loadMarksImpl() @ 0x0000000012d680db in /usr/bin/clickhouse
8. DB::MergeTreeMarksLoader::loadMarks() @ 0x0000000012d672fd in /usr/bin/clickhouse
9. DB::MergeTreeMarksLoader::getMark(unsigned long, unsigned long) @ 0x0000000012d66058 in /usr/bin/clickhouse
10. DB::MergeTreeReaderCompact::initialize() @ 0x0000000012d83abf in /usr/bin/clickhouse
11. DB::MergeTreeReaderCompact::readRows(unsigned long, unsigned long, bool, unsigned long, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012d8665c in /usr/bin/clickhouse
12. DB::MergeTreeRangeReader::DelayedStream::finalize(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012dacd22 in /usr/bin/clickhouse
13. DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x0000000012db55c0 in /usr/bin/clickhouse
14. DB::IMergeTreeSelectAlgorithm::readFromPartImpl() @ 0x0000000012da97c8 in /usr/bin/clickhouse
15. DB::IMergeTreeSelectAlgorithm::readFromPart() @ 0x0000000012daa4fa in /usr/bin/clickhouse
16. DB::IMergeTreeSelectAlgorithm::read() @ 0x0000000012da76f5 in /usr/bin/clickhouse
17. DB::MergeTreeSource::tryGenerate() @ 0x00000000135dbd0f in /usr/bin/clickhouse
18. DB::ISource::work() @ 0x0000000013185c06 in /usr/bin/clickhouse
19. DB::ExecutionThreadContext::executeTask() @ 0x000000001319d61a in /usr/bin/clickhouse
20. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000013194410 in /usr/bin/clickhouse
21. DB::PipelineExecutor::execute(unsigned long, bool) @ 0x0000000013193742 in /usr/bin/clickhouse
22. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<true>::ThreadFromGlobalPoolImpl<DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0>(DB::PullingAsyncPipelineExecutor::pull(DB::Chunk&, unsigned long)::$_0&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x00000000131a0faf in /usr/bin/clickhouse
23. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c700e24 in /usr/bin/clickhouse
24. ? @ 0x0000793329c50609 in ?
25. ? @ 0x0000793329b75353 in ?

Error in install/bootstrap-snuba.sh:4.
'$dcr snuba-api migrations migrate --force' exited with status 1
-> ./install.sh:main:36
--> install/bootstrap-snuba.sh:source:4

Cleaning up...

My production instance, updated from 24.4.2 to 24.5.0, migrated without issue.
This issue arises on the fresh-install test I run in CI.
I tested fresh installs of 24.5.0, 24.5.1, and 24.6.0; all fail the same way now.

Looking for help in identifying the next steps to figure out this issue.

Event ID

No response

@azaslavsky azaslavsky transferred this issue from getsentry/self-hosted Jun 21, 2024
@azaslavsky

@getsentry/owners-snuba Did we add any new migrations after 24.5.0 was released?

@evanh
Member

evanh commented Jun 24, 2024

We did add a number of migrations, but I don't think that would be the problem. We have been upgrading Clickhouse versions, and from what I can find, that seems a more likely culprit.

@plemelin Can you tell us which version of Clickhouse is running?

If you look at the docker container for Clickhouse, it should have the version number in it (e.g. ghcr.io/getsentry/image-mirror-altinity-clickhouse-server:22.8.15.25.altinitystable), or you can run docker exec -it <name of clickhouse container> clickhouse-client and it will print it out as well.
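
For anyone following along, a quick sketch of both approaches, assuming the default self-hosted container name sentry-self-hosted-clickhouse-1 (adjust to whatever docker ps shows on your host):

# Read the version straight from the image tag of the running container
docker ps --format '{{.Names}}: {{.Image}}' | grep clickhouse

# Or ask the server itself via the bundled client
docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client --query 'SELECT version()'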

@plemelin
Author

@evanh, thank you for helping me here.

Here are the clickhouse versions:

  • On sentry 24.4.2:
    sh-4.2# docker exec -it sentry-self-hosted-clickhouse-1 clickhouse-client
    ClickHouse client version 21.8.13.1.altinitystable (altinity build).
    Connecting to localhost:9000 as user default.
    Connected to ClickHouse server version 21.8.13 revision 54449.
    
  • On sentry 24.5.1, I have to start it manually since the installation fails and leaves it stopped:
    sh-4.2# docker compose run clickhouse clickhouse-client
    ClickHouse client version 23.8.11.29.altinitystable (altinity build).
    Connecting to localhost:9000 as user default.
    Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
    
  • On sentry 24.6.0, I have to start it manually since the installation fails and leaves it stopped:
    sh-4.2# docker compose run clickhouse clickhouse-client
    ClickHouse client version 23.8.11.29.altinitystable (altinity build).
    Connecting to localhost:9000 as user default.
    Code: 210. DB::NetException: Connection refused (localhost:9000). (NETWORK_ERROR)
    

@onewland
Contributor

Hi @plemelin,

Can you share the output of docker compose logs clickhouse? It sounds like your server isn't starting.

@plemelin
Author

plemelin commented Jun 25, 2024

docker compose logs clickhouse

sh-4.2# docker compose logs clickhouse
clickhouse-1  | ClickHouse Database directory appears to contain a database; Skipping initialization
clickhouse-1  | Processing configuration file '/etc/clickhouse-server/config.xml'.
clickhouse-1  | Merging configuration file '/etc/clickhouse-server/config.d/docker_related_config.xml'.
clickhouse-1  | Merging configuration file '/etc/clickhouse-server/config.d/sentry.xml'.
clickhouse-1  | Logging warning to /var/log/clickhouse-server/clickhouse-server.log
clickhouse-1  | Logging errors to /var/log/clickhouse-server/clickhouse-server.err.log
clickhouse-1  | 2024.06.25 17:42:57.864242 [ 1 ] {} <Warning> Context: Linux transparent hugepages are set to "always". Check /sys/kernel/mm/transparent_hugepage/enabled
clickhouse-1  | 2024.06.25 17:42:57.875006 [ 1 ] {} <Warning> AsynchronousMetrics: Thermal monitor '6' exists but could not be read: errno: 61, strerror: No data available.
clickhouse-1  | 2024.06.25 17:42:57.903651 [ 1 ] {} <Warning> AsynchronousMetrics: Hardware monitor 'iwlwifi_1', sensor '1' exists but could not be read: errno: 61, strerror: No data available.
clickhouse-1  | 2024.06.25 17:42:58.112570 [ 1 ] {} <Warning> Application: Listen [::]:9009 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 23.8.11.29.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse-1  | 2024.06.25 17:42:58.116400 [ 1 ] {} <Warning> Access(local_directory): File /var/lib/clickhouse/access/users.list doesn't exist
clickhouse-1  | 2024.06.25 17:42:58.116426 [ 1 ] {} <Warning> Access(local_directory): Recovering lists in directory /var/lib/clickhouse/access/
clickhouse-1  | 2024.06.25 17:42:58.147351 [ 1 ] {} <Warning> Application: Listen [::]:8123 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 23.8.11.29.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse-1  | 2024.06.25 17:42:58.147489 [ 1 ] {} <Warning> Application: Listen [::]:9000 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 23.8.11.29.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse-1  | 2024.06.25 17:42:58.147580 [ 1 ] {} <Warning> Application: Listen [::]:9004 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 23.8.11.29.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse-1  | 2024.06.25 17:42:58.147664 [ 1 ] {} <Warning> Application: Listen [::]:9005 failed: Poco::Exception. Code: 1000, e.code() = 0, DNS error: EAI: Address family for hostname not supported (version 23.8.11.29.altinitystable (altinity build)). If it is an IPv6 or IPv4 address and your host has disabled IPv6 or IPv4, then consider to specify not disabled IPv4 or IPv6 address to listen in <listen_host> element of configuration file. Example for disabled IPv6: <listen_host>0.0.0.0</listen_host> . Example for disabled IPv4: <listen_host>::</listen_host>
clickhouse-1  | 2024.06.25 17:43:51.213515 [ 48 ] {0342558c-eada-494e-88c1-2d61f6549a89} <Error> executeQuery: Code: 60. DB::Exception: Table default.migrations_local does not exist. (UNKNOWN_TABLE) (version 23.8.11.29.altinitystable (altinity build)) (from 172.18.0.6:39306) (in query: SELECT group, migration_id, status FROM migrations_local FINAL WHERE group IN ('system', 'events', 'transactions', 'discover', 'outcomes', 'metrics', 'sessions', 'profiles', 'functions', 'replays', 'generic_metrics', 'search_issues', 'spans', 'group_attributes')), Stack trace (when copying this message, always include the lines below):
clickhouse-1  | 
clickhouse-1  | 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
clickhouse-1  | 1. DB::Exception::Exception<String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&) @ 0x0000000007132547 in /usr/bin/clickhouse
clickhouse-1  | 2. DB::IDatabase::getTable(String const&, std::shared_ptr<DB::Context const>) const @ 0x00000000114add99 in /usr/bin/clickhouse
clickhouse-1  | 3. DB::DatabaseCatalog::getTableImpl(DB::StorageID const&, std::shared_ptr<DB::Context const>, std::optional<DB::Exception>*) const @ 0x00000000116dcf68 in /usr/bin/clickhouse
clickhouse-1  | 4. DB::DatabaseCatalog::getTable(DB::StorageID const&, std::shared_ptr<DB::Context const>) const @ 0x00000000116e6709 in /usr/bin/clickhouse
clickhouse-1  | 5. DB::JoinedTables::getLeftTableStorage() @ 0x0000000011fb8c34 in /usr/bin/clickhouse
clickhouse-1  | 6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011ec18c3 in /usr/bin/clickhouse
clickhouse-1  | 7. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011f74948 in /usr/bin/clickhouse
clickhouse-1  | 8. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x0000000011e7b91e in /usr/bin/clickhouse
clickhouse-1  | 9. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122bf58a in /usr/bin/clickhouse
clickhouse-1  | 10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122bb5b5 in /usr/bin/clickhouse
clickhouse-1  | 11. DB::TCPHandler::runImpl() @ 0x0000000013137519 in /usr/bin/clickhouse
clickhouse-1  | 12. DB::TCPHandler::run() @ 0x00000000131498f9 in /usr/bin/clickhouse
clickhouse-1  | 13. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
clickhouse-1  | 14. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
clickhouse-1  | 15. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
clickhouse-1  | 16. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
clickhouse-1  | 17. ? @ 0x00007f940768c609 in ?
clickhouse-1  | 18. ? @ 0x00007f94075b1353 in ?
clickhouse-1  | 
clickhouse-1  | 2024.06.25 17:43:51.213613 [ 48 ] {0342558c-eada-494e-88c1-2d61f6549a89} <Error> TCPHandler: Code: 60. DB::Exception: Table default.migrations_local does not exist. (UNKNOWN_TABLE), Stack trace (when copying this message, always include the lines below):
clickhouse-1  | 
clickhouse-1  | 0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
clickhouse-1  | 1. DB::Exception::Exception<String, String>(int, FormatStringHelperImpl<std::type_identity<String>::type, std::type_identity<String>::type>, String&&, String&&) @ 0x0000000007132547 in /usr/bin/clickhouse
clickhouse-1  | 2. DB::IDatabase::getTable(String const&, std::shared_ptr<DB::Context const>) const @ 0x00000000114add99 in /usr/bin/clickhouse
clickhouse-1  | 3. DB::DatabaseCatalog::getTableImpl(DB::StorageID const&, std::shared_ptr<DB::Context const>, std::optional<DB::Exception>*) const @ 0x00000000116dcf68 in /usr/bin/clickhouse
clickhouse-1  | 4. DB::DatabaseCatalog::getTable(DB::StorageID const&, std::shared_ptr<DB::Context const>) const @ 0x00000000116e6709 in /usr/bin/clickhouse
clickhouse-1  | 5. DB::JoinedTables::getLeftTableStorage() @ 0x0000000011fb8c34 in /usr/bin/clickhouse
clickhouse-1  | 6. DB::InterpreterSelectQuery::InterpreterSelectQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context> const&, std::optional<DB::Pipe>, std::shared_ptr<DB::IStorage> const&, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&, std::shared_ptr<DB::StorageInMemoryMetadata const> const&, std::shared_ptr<DB::PreparedSets>) @ 0x0000000011ec18c3 in /usr/bin/clickhouse
clickhouse-1  | 7. DB::InterpreterSelectWithUnionQuery::InterpreterSelectWithUnionQuery(std::shared_ptr<DB::IAST> const&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&, std::vector<String, std::allocator<String>> const&) @ 0x0000000011f74948 in /usr/bin/clickhouse
clickhouse-1  | 8. DB::InterpreterFactory::get(std::shared_ptr<DB::IAST>&, std::shared_ptr<DB::Context>, DB::SelectQueryOptions const&) @ 0x0000000011e7b91e in /usr/bin/clickhouse
clickhouse-1  | 9. DB::executeQueryImpl(char const*, char const*, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum, DB::ReadBuffer*) @ 0x00000000122bf58a in /usr/bin/clickhouse
clickhouse-1  | 10. DB::executeQuery(String const&, std::shared_ptr<DB::Context>, bool, DB::QueryProcessingStage::Enum) @ 0x00000000122bb5b5 in /usr/bin/clickhouse
clickhouse-1  | 11. DB::TCPHandler::runImpl() @ 0x0000000013137519 in /usr/bin/clickhouse
clickhouse-1  | 12. DB::TCPHandler::run() @ 0x00000000131498f9 in /usr/bin/clickhouse
clickhouse-1  | 13. Poco::Net::TCPServerConnection::start() @ 0x0000000015b42834 in /usr/bin/clickhouse
clickhouse-1  | 14. Poco::Net::TCPServerDispatcher::run() @ 0x0000000015b43a31 in /usr/bin/clickhouse
clickhouse-1  | 15. Poco::PooledThread::run() @ 0x0000000015c7a667 in /usr/bin/clickhouse
clickhouse-1  | 16. Poco::ThreadImpl::runnableEntry(void*) @ 0x0000000015c7893c in /usr/bin/clickhouse
clickhouse-1  | 17. ? @ 0x00007f940768c609 in ?
clickhouse-1  | 18. ? @ 0x00007f94075b1353 in ?
clickhouse-1  | 
clickhouse-1  | 2024.06.25 17:43:51.214854 [ 48 ] {78a1b797-94f9-412d-86d8-312a47a7636c} <Error> executeQuery: Code: 60. DB::Exception: Table default.migrations_local does not exist. (UNKNOWN_TABLE) (version 23.8.11.29.altinitystable (altinity build)) (from 172.18.0.6:39318) (in query: SELECT group, migration_id, status FROM migrations_local FINAL WHERE group IN ('system', 'events', 'transactions', 'discover', 'outcomes', 'metrics', 'sessions', 'profiles', 'functions', 'replays', 'generic_metrics', 'search_issues', 'spans', 'group_attributes')), Stack trace (when copying this message, always include the lines below):
clickhouse-1  | 

* SNIP *

This is followed by multiple exceptions for subsequent queries.

@onewland
Contributor

Try running

snuba migrations migrate --group system --force

from your snuba container, and see if the problem persists.
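
For reference, a sketch of how that can be run from the host, assuming the snuba-api service name from self-hosted's docker-compose.yml (the installer's bootstrap-snuba.sh invokes the same service via $dcr, as seen in the error output above):

# One-off container, same pattern install.sh uses
docker compose run --rm snuba-api migrations migrate --group system --force

# Or, if the stack is already up:
docker compose exec snuba-api snuba migrations migrate --group system --force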

@plemelin
Author

> Try running
>
> snuba migrations migrate --group system --force
>
> from your snuba container, and see if the problem persists.

When the install.sh command fails, no containers are up; I have not run docker compose up -d yet.
Should I run docker compose up -d and then force the migration?

This is a fresh install where the install.sh command fails.

@plemelin
Author

So I did the following:

docker compose up -d
docker exec snuba-api /bin/bash
snuba migrations migrate --group system --force

And here is the result:

snuba@dc70364ddf81:/usr/src/snuba$ snuba migrations migrate --group system --force
2024-06-26 13:30:43,093 Initializing Snuba...
2024-06-26 13:30:48,487 Snuba initialization took 5.396030636002251s
{"module": "snuba.migrations.connect", "event": "Snuba has only been tested on Clickhouse versions up to 23.3.19.33 (clickhouse:9000 - 23.8.11.29). Higher versions might not be supported.", "severity": "warning", "timestamp": "2024-06-26T13:30:48.518849Z"}
Traceback (most recent call last):
  File "/usr/src/snuba/snuba/clickhouse/native.py", line 200, in execute
    result_data = query_execute()
                  ^^^^^^^^^^^^^^^
  File "/usr/src/snuba/snuba/clickhouse/native.py", line 183, in query_execute
    return conn.execute(  # type: ignore
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 373, in execute
    rv = self.process_ordinary_query(
         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 571, in process_ordinary_query
    return self.receive_result(with_column_types=with_column_types,
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 204, in receive_result
    return result.get_result()
           ^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/result.py", line 50, in get_result
    for packet in self.packet_generator:
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 220, in packet_generator
    packet = self.receive_packet()
             ^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.11/site-packages/clickhouse_driver/client.py", line 237, in receive_packet
    raise packet.exception
clickhouse_driver.errors.ServerException: Code: 74.
DB::ErrnoException. DB::ErrnoException: Cannot read from file 81, errno: 1, strerror: Operation not permitted: While executing MergeTreeInOrder. Stack trace:

0. DB::Exception::Exception(DB::Exception::MessageMasked&&, int, bool) @ 0x000000000c61ff37 in /usr/bin/clickhouse
1. DB::ErrnoException::ErrnoException(String const&, int, int, std::optional<String> const&) @ 0x000000000c621494 in /usr/bin/clickhouse
2. DB::ThreadPoolReader::submit(DB::IAsynchronousReader::Request) @ 0x0000000010c30ab6 in /usr/bin/clickhouse
3. DB::AsynchronousReadBufferFromFileDescriptor::asyncReadInto(char*, unsigned long, Priority) @ 0x0000000010c27a67 in /usr/bin/clickhouse
4. DB::AsynchronousReadBufferFromFileDescriptor::nextImpl() @ 0x0000000010c28020 in /usr/bin/clickhouse
5. DB::CompressedReadBufferBase::readCompressedData(unsigned long&, unsigned long&, bool) @ 0x0000000011293d7b in /usr/bin/clickhouse
6. DB::CompressedReadBufferFromFile::nextImpl() @ 0x0000000011295c85 in /usr/bin/clickhouse
7. DB::MergeTreeMarksLoader::loadMarksImpl() @ 0x0000000012d680db in /usr/bin/clickhouse
8. DB::MergeTreeMarksLoader::loadMarks() @ 0x0000000012d672fd in /usr/bin/clickhouse
9. DB::MergeTreeMarksLoader::getMark(unsigned long, unsigned long) @ 0x0000000012d66058 in /usr/bin/clickhouse
10. DB::MergeTreeReaderCompact::initialize() @ 0x0000000012d83abf in /usr/bin/clickhouse
11. DB::MergeTreeReaderCompact::readRows(unsigned long, unsigned long, bool, unsigned long, std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012d8665c in /usr/bin/clickhouse
12. DB::MergeTreeRangeReader::DelayedStream::finalize(std::vector<COW<DB::IColumn>::immutable_ptr<DB::IColumn>, std::allocator<COW<DB::IColumn>::immutable_ptr<DB::IColumn>>>&) @ 0x0000000012dacd22 in /usr/bin/clickhouse
13. DB::MergeTreeRangeReader::read(unsigned long, DB::MarkRanges&) @ 0x0000000012db55c0 in /usr/bin/clickhouse
14. DB::IMergeTreeSelectAlgorithm::readFromPartImpl() @ 0x0000000012da97c8 in /usr/bin/clickhouse
15. DB::IMergeTreeSelectAlgorithm::readFromPart() @ 0x0000000012daa4fa in /usr/bin/clickhouse
16. DB::IMergeTreeSelectAlgorithm::read() @ 0x0000000012da76f5 in /usr/bin/clickhouse
17. DB::MergeTreeSource::tryGenerate() @ 0x00000000135dbd0f in /usr/bin/clickhouse
18. DB::ISource::work() @ 0x0000000013185c06 in /usr/bin/clickhouse
19. DB::ExecutionThreadContext::executeTask() @ 0x000000001319d61a in /usr/bin/clickhouse
20. DB::PipelineExecutor::executeStepImpl(unsigned long, std::atomic<bool>*) @ 0x0000000013194410 in /usr/bin/clickhouse
21. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<DB::PipelineExecutor::spawnThreads()::$_0, void ()>>(std::__function::__policy_storage const*) @ 0x000000001319554f in /usr/bin/clickhouse
22. ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::worker(std::__list_iterator<ThreadFromGlobalPoolImpl<false>, void*>) @ 0x000000000c6feb1e in /usr/bin/clickhouse
23. void std::__function::__policy_invoker<void ()>::__call_impl<std::__function::__default_alloc_func<ThreadFromGlobalPoolImpl<false>::ThreadFromGlobalPoolImpl<void ThreadPoolImpl<ThreadFromGlobalPoolImpl<false>>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>(void&&)::'lambda'(), void ()>>(std::__function::__policy_storage const*) @ 0x000000000c7025dc in /usr/bin/clickhouse
24. void* std::__thread_proxy[abi:v15000]<std::tuple<std::unique_ptr<std::__thread_struct, std::default_delete<std::__thread_struct>>, void ThreadPoolImpl<std::thread>::scheduleImpl<void>(std::function<void ()>, Priority, std::optional<unsigned long>, bool)::'lambda0'()>>(void*) @ 0x000000000c700e24 in /usr/bin/clickhouse
25. ? @ 0x00007594bd84b609 in ?
26. ? @ 0x00007594bd770353 in ?

@plemelin
Author

> We did add a number of migrations, but I don't think that would be the problem. We have been upgrading Clickhouse versions, and from what I can find, that seems a more likely culprit.
>
> @plemelin Can you tell us which version of Clickhouse is running?
>
> If you look at the docker container for Clickhouse, it should have the version number in it (e.g. ghcr.io/getsentry/image-mirror-altinity-clickhouse-server:22.8.15.25.altinitystable), or you can run docker exec -it <name of clickhouse container> clickhouse-client and it will print it out as well.

@evanh you seem to have found the culprit. I tried the <local_filesystem_read_method>pread</local_filesystem_read_method> setting earlier without success, but I hadn't understood that it needs to go in the users.xml file (it is a profile setting); I had modified clickhouse/config.xml instead. I revisited this today.

As I'm not yet 100% sure how to manage a users.d include to override the default profile settings, I just copied the current users.xml into clickhouse/users.xml and mounted it like so:

diff --git a/docker-compose.yml b/docker-compose.yml
index f88d74d..b0d34df 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -229,6 +229,10 @@ services:
         read_only: true
         source: ./clickhouse/config.xml
         target: /etc/clickhouse-server/config.d/sentry.xml
+      - type: bind
+        read_only: true
+        source: ./clickhouse/users.xml
+        target: /etc/clickhouse-server/users.xml
     environment:
       # This limits Clickhouse's memory to 30% of the host memory
       # If you have high volume and your search return incomplete results

Seems like the install is working now on my 24.6.0 test.

Now, what would be the proper way of addressing this? (A lighter-weight users.d alternative is sketched after the patch below.)


My current patch:

diff --git a/clickhouse/users.xml b/clickhouse/users.xml
new file mode 100644
index 0000000..3d4d0d7
--- /dev/null
+++ b/clickhouse/users.xml
@@ -0,0 +1,112 @@
+<clickhouse>
+    <!-- See also the files in users.d directory where the settings can be overridden. -->
+
+    <!-- Profiles of settings. -->
+    <profiles>
+        <!-- Default settings. -->
+        <default>
+            <local_filesystem_read_method>pread</local_filesystem_read_method>
+        </default>
+
+        <!-- Profile that allows only read queries. -->
+        <readonly>
+            <readonly>1</readonly>
+        </readonly>
+    </profiles>
+
+    <!-- Users and ACL. -->
+    <users>
+        <!-- If user name was not specified, 'default' user is used. -->
+        <default>
+            <!-- See also the files in users.d directory where the password can be overridden.
+
+                 Password could be specified in plaintext or in SHA256 (in hex format).
+
+                 If you want to specify password in plaintext (not recommended), place it in 'password' element.
+                 Example: <password>qwerty</password>.
+                 Password could be empty.
+
+                 If you want to specify SHA256, place it in 'password_sha256_hex' element.
+                 Example: <password_sha256_hex>65e84be33532fb784c48129675f9eff3a682b27168c0ea744b2cf58ee02337c5</password_sha256_hex>
+                 Restrictions of SHA256: impossibility to connect to ClickHouse using MySQL JS client (as of July 2019).
+
+                 If you want to specify double SHA1, place it in 'password_double_sha1_hex' element.
+                 Example: <password_double_sha1_hex>e395796d6546b1b65db9d665cd43f0e858dd4303</password_double_sha1_hex>
+
+                 If you want to specify a previously defined LDAP server (see 'ldap_servers' in the main config) for authentication,
+                  place its name in 'server' element inside 'ldap' element.
+                 Example: <ldap><server>my_ldap_server</server></ldap>
+
+                 If you want to authenticate the user via Kerberos (assuming Kerberos is enabled, see 'kerberos' in the main config),
+                  place 'kerberos' element instead of 'password' (and similar) elements.
+                 The name part of the canonical principal name of the initiator must match the user name for authentication to succeed.
+                 You can also place 'realm' element inside 'kerberos' element to further restrict authentication to only those requests
+                  whose initiator's realm matches it.
+                 Example: <kerberos />
+                 Example: <kerberos><realm>EXAMPLE.COM</realm></kerberos>
+
+                 How to generate decent password:
+                 Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha256sum | tr -d '-'
+                 In first line will be password and in second - corresponding SHA256.
+
+                 How to generate double SHA1:
+                 Execute: PASSWORD=$(base64 < /dev/urandom | head -c8); echo "$PASSWORD"; echo -n "$PASSWORD" | sha1sum | tr -d '-' | xxd -r -p | sha1sum | tr -d '-'
+                 In first line will be password and in second - corresponding double SHA1.
+            -->
+            <password></password>
+
+            <!-- List of networks with open access.
+
+                 To open access from everywhere, specify:
+                    <ip>::/0</ip>
+
+                 To open access only from localhost, specify:
+                    <ip>::1</ip>
+                    <ip>127.0.0.1</ip>
+
+                 Each element of list has one of the following forms:
+                 <ip> IP-address or network mask. Examples: 213.180.204.3 or 10.0.0.1/8 or 10.0.0.1/255.255.255.0
+                     2a02:6b8::3 or 2a02:6b8::3/64 or 2a02:6b8::3/ffff:ffff:ffff:ffff::.
+                 <host> Hostname. Example: server01.clickhouse.com.
+                     To check access, DNS query is performed, and all received addresses compared to peer address.
+                 <host_regexp> Regular expression for host names. Example, ^server\d\d-\d\d-\d\.clickhouse\.com$
+                     To check access, DNS PTR query is performed for peer address and then regexp is applied.
+                     Then, for result of PTR query, another DNS query is performed and all received addresses compared to peer address.
+                     Strongly recommended that regexp is ends with $
+                 All results of DNS requests are cached till server restart.
+            -->
+            <networks>
+                <ip>::/0</ip>
+            </networks>
+
+            <!-- Settings profile for user. -->
+            <profile>default</profile>
+
+            <!-- Quota for user. -->
+            <quota>default</quota>
+
+            <!-- User can create other users and grant rights to them. -->
+            <!-- <access_management>1</access_management> -->
+        </default>
+    </users>
+
+    <!-- Quotas. -->
+    <quotas>
+        <!-- Name of quota. -->
+        <default>
+            <!-- Limits for time interval. You could specify many intervals with different limits. -->
+            <interval>
+                <!-- Length of interval. -->
+                <duration>3600</duration>
+
+                <!-- No limits. Just calculate resource usage for time interval. -->
+                <queries>0</queries>
+                <errors>0</errors>
+                <result_rows>0</result_rows>
+                <read_rows>0</read_rows>
+                <execution_time>0</execution_time>
+            </interval>
+        </default>
+    </quotas>
+</clickhouse>
+
diff --git a/docker-compose.yml b/docker-compose.yml
index f88d74d..b0d34df 100644
--- a/docker-compose.yml
+++ b/docker-compose.yml
@@ -229,6 +229,10 @@ services:
         read_only: true
         source: ./clickhouse/config.xml
         target: /etc/clickhouse-server/config.d/sentry.xml
+      - type: bind
+        read_only: true
+        source: ./clickhouse/users.xml
+        target: /etc/clickhouse-server/users.xml
     environment:
       # This limits Clickhouse's memory to 30% of the host memory
       # If you have high volume and your search return incomplete results
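
A lighter-weight alternative, just a sketch I have not verified in this setup: instead of replacing the whole users.xml, drop only the one setting into ClickHouse's users.d merge directory (the stock users.xml itself points at users.d for overrides). A minimal override file, using the hypothetical name clickhouse/local-read-method.xml:

<clickhouse>
    <profiles>
        <default>
            <!-- Use plain pread instead of the async thread-pool reader -->
            <local_filesystem_read_method>pread</local_filesystem_read_method>
        </default>
    </profiles>
</clickhouse>

mounted with a bind similar to the one above:

      - type: bind
        read_only: true
        source: ./clickhouse/local-read-method.xml
        target: /etc/clickhouse-server/users.d/local-read-method.xml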

@Evergrin

Hello! I have the same issue during an upgrade from 23.11 to 24.8.0, following the instructions at https://develop.sentry.dev/self-hosted/releases/.
So I upgraded my sentry to v23.11.0, but then the ClickHouse migrations failed and the ClickHouse container fails to start. And I certainly can't roll back to the previous version; version 23.11 crashes too. So, what should I do?
