TLDR duuZ faced issues with log retention in ClickHouse and with logging in to SigNoz. nitya-signoz provided solutions for log cleanup, a PVC increase, and running a single replica of the query service.
As of now, it’s not possible via the helm chart. What do you mean by `or do we have a way to clear that via a ClickHouse command`? Do you want to remove the TTL?
Actually, my PVC is full and cannot be increased as its limit is reached. The whole SigNoz system is stuck because the DB is not responding due to the disk issue. So if I clear the logs, I think I can log in from the UI.
If you are able to exec into ClickHouse then try deleting the data
```
kubectl exec -n platform -it chi-my-release-clickhouse-cluster-0-0-0 -- sh
clickhouse client
use signoz_logs;
truncate table signoz_logs.logs;
```
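Before truncating, it can help to see which tables are actually eating the disk. This is a standard ClickHouse system-table query, not anything SigNoz-specific:
```
clickhouse client
-- Show on-disk size per table, largest first
SELECT database, table, formatReadableSize(sum(bytes_on_disk)) AS size
FROM system.parts
WHERE active
GROUP BY database, table
ORDER BY sum(bytes_on_disk) DESC;
```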
Okay thanks, logs will be created automatically in the next run?
Yeah, once you are done cleaning up, do set the retention period from the UI.
You can also push your data to S3 for cold storage.
nitya-signoz Actually it didn't help, I'm not able to log in to SigNoz with the admin account; it says the user does not exist. Can you help?
I enabled the S3 cold storage and, in the middle of everything, I believe it got stuck again because my data is huge. Now I'm not able to log in with the admin account. Do you know in which table we are storing these user details?
And also, is there any way to refresh all the data, like logs, traces, and metrics?
When I try with the admin user it says the account doesn't exist, but I can see all the users in the sqlite DB. When enabling S3 cold storage, are there any migration scripts that we can run manually, just to make sure everything moved correctly? I believe that because the data is huge, the process somehow stopped when I enabled S3 cold storage, which stopped the migration script. Any help, guys?
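For inspecting those users directly, something like the sketch below should work. The pod name, DB path, and table/column names are assumptions (not confirmed in this thread), so verify them first:
```
# Pod name, DB path, and table/column names are assumptions; verify with .tables
kubectl exec -n platform -it my-release-signoz-query-service-0 -- sh
sqlite3 /var/lib/signoz/signoz.db
.tables
SELECT id, email FROM users;
```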
<@4K15aa> any idea about the admin user issue?
You can’t move things manually to S3, it’s done by ClickHouse internally. You can check the currently running processes in ClickHouse with `SELECT query_id, query FROM system.processes;` It will show you whether the TTL is being applied or not.
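TTL changes applied via ALTER also show up as background mutations, so another standard ClickHouse check (again, not SigNoz-specific) is:
```
clickhouse client
-- Unfinished mutations (e.g. TTL materialization after MODIFY TTL) show is_done = 0
SELECT database, table, mutation_id, command, is_done
FROM system.mutations
WHERE is_done = 0;
```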
How much storage have you allocated to ClickHouse, and how much data are you ingesting?
Initially 200 GB
And the ingestion is around 300 GB, I guess.
Is it 300 GB per day?
No, initially we are dumping some data to test; daily 10-15 GB, I guess.
Okay, just to be clear, you are facing two issues:
• Not able to log in
• ClickHouse disk full
Right?
Yes, not able to log in. For ClickHouse, I've increased the PVC to 500 GB with AWS support.
We can do a huddle for the login issue.
I think it's working now. I'm able to log in and can see a few items after increasing the PVC.
Cool that’s great.
Also, I truncated a table in signoz_traces; will it be recreated in the next run?
To stop ingestion for some time, you can stop the otel-collector and the otel-collector-metrics (see the sketch below).
Then run the truncate; you can also change the TTL at this point and configure S3, as it will be fast.
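A minimal sketch of pausing ingestion, assuming the namespace and release name from earlier in the thread (the deployment names are assumptions; confirm them first):
```
# Deployment names assumed from a default "my-release" install; verify first
kubectl get deploy -n platform
kubectl scale deploy -n platform my-release-signoz-otel-collector --replicas=0
kubectl scale deploy -n platform my-release-signoz-otel-collector-metrics --replicas=0
# Scale back up once cleanup is done
kubectl scale deploy -n platform my-release-signoz-otel-collector --replicas=1
kubectl scale deploy -n platform my-release-signoz-otel-collector-metrics --replicas=1
```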
yes that is what i did yesterday. It is stopped for now.
As of now it looks like something happened when I enabled S3 cold storage. Sometimes I'm able to see things and sometimes I'm getting an authorization error while accessing the APIs from the frontend.
Can you check if any of the pods are restarting?
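For reference, restart counts show up in the RESTARTS column (namespace assumed from earlier in the thread):
```
kubectl get pods -n platform
# Inspect a specific pod's events for crash loops
kubectl describe pod -n platform <pod-name>
```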
No, it's not restarting, but I deployed the new setup and sometimes I'm able to see things and sometimes it's logging me out.
Any idea?
Looks like when we are running the setup with replica count 2, some calls give a 403.
Ahh, as of now you will have to run one replica of the query service.
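A minimal sketch of pinning the query service to one replica via helm, assuming the chart exposes a `queryService.replicaCount` value (the key name is an assumption; check the chart's values first):
```
# Key name is an assumption; confirm with: helm show values signoz/signoz
helm upgrade my-release signoz/signoz -n platform \
  --set queryService.replicaCount=1
```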
Yes, did the same and it works, thanks.
duuZ
Wed, 01 Mar 2023 04:30:49 UTC
Hi team, one quick question: is there a way to implement retention of logs data via the helm chart and not from the UI, or do we have a way to clear that via a ClickHouse command?