Provide full logs #2155

Open
xzmeng opened this issue Nov 11, 2024 · 1 comment

Comments

@xzmeng
Contributor

xzmeng commented Nov 11, 2024

Is your feature request related to a problem? Please describe.

The current freqUI Logs view displays the most recent 500-1000 lines, which is sufficient for monitoring the bot's activity in real time and over the past few hours. However, this may be inadequate if you need to review the bot's status over a longer period or at specific moments in the past.

While users can achieve this by connecting to the server via SSH or implementing their custom solutions, it would be highly convenient if freqUI provided this functionality out of the box.

Describe the solution you'd like

  • View the complete logs
  • Filter logs by log level
  • Navigate logs by date
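
Just to make the ask concrete, a hypothetical endpoint shape covering these three points could look like the sketch below. The path, parameters and models are made up for illustration only - nothing like this exists in the current API:

```python
# Hypothetical sketch only - not an existing freqtrade/freqUI endpoint.
# It just illustrates the query parameters the three points above imply.
from datetime import datetime
from typing import Optional

from fastapi import APIRouter, Query
from pydantic import BaseModel

router = APIRouter()


class LogPage(BaseModel):
    lines: list[str]
    has_more: bool


@router.get("/logs/history", response_model=LogPage)
def get_log_history(
    level: Optional[str] = Query(None, description="e.g. WARNING - filter by log level"),
    before: Optional[datetime] = Query(None, description="only return lines older than this timestamp"),
    limit: int = Query(1000, le=1000, description="page size"),
) -> LogPage:
    # The backend would have to read persisted log files here - which is
    # exactly the hard part discussed in the reply below.
    raise NotImplementedError
```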
@xmatthias
Member

xmatthias commented Nov 11, 2024

From an idea perspective, it's certainly a nice-to-have.
However, there are multiple issues with this - which make it rather unfeasible from a backend perspective.

Currently, logs are stored in-process and rotated out at around 1000 lines iirc - so increasing this to "unlimited" would introduce an (in this case intentional) memory leak, which could end up crashing the bot once it actually runs out of memory (we can't control the verbosity, or what logs a strategy writes).
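
For context, such a bounded in-process buffer is essentially a fixed-size deque attached to a logging handler - a minimal illustrative sketch (not freqtrade's actual code):

```python
# Illustrative sketch of a bounded in-memory log buffer (not freqtrade's actual code).
import logging
from collections import deque


class BufferingHandler(logging.Handler):
    """Keep only the most recent `capacity` records; older ones are rotated out."""

    def __init__(self, capacity: int = 1000) -> None:
        super().__init__()
        self.buffer: deque[logging.LogRecord] = deque(maxlen=capacity)

    def emit(self, record: logging.LogRecord) -> None:
        # Appending beyond maxlen silently drops the oldest record,
        # which is what keeps memory usage bounded.
        self.buffer.append(record)


logging.getLogger().addHandler(BufferingHandler(capacity=1000))
```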

Secondly, we can't rely on the user using --log-file - and I don't intend to force this upon users (it'll consume up to 100 MB with log rotation, with no real reason if you don't actually need this).
Without this, the log is not persisted outside of the in-process storage (and the terminal it's output to) - so that's not a reliable / feasible solution.
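
For reference, the ~100 MB figure corresponds to plain size-based rotation, e.g. 10 files of 10 MB each - a minimal sketch using Python's stock RotatingFileHandler (illustrative; not necessarily the exact configuration freqtrade uses):

```python
# Illustrative: size-based log rotation capping disk usage at roughly 10 x 10 MB.
import logging
from logging.handlers import RotatingFileHandler

handler = RotatingFileHandler(
    "freqtrade.log",
    maxBytes=10 * 1024 * 1024,  # roll over after ~10 MB
    backupCount=10,             # keep up to 10 rotated files
)
handler.setFormatter(logging.Formatter("%(asctime)s - %(name)s - %(levelname)s - %(message)s"))
logging.getLogger().addHandler(handler)
```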


Lastly (this is the frontend one) - assuming we have log files available, you can't really fetch 100 MB of logs (of anything, really) into a frontend process and expect the browser / UI to still perform.
While filtering could be moved to the backend, I'm not sure that's reasonable depending on what the user's goal is.

If the user is just browsing through the logs, you'd need time-based pagination over ever-changing logs (the bot will continue to write logs), including eventual log rotation (which means you'll potentially load 10x 10 MB files to get to the timestamp you're looking for).
Now that's obviously per call - so if we assume we get 1k lines per page (an amount we know still performs roughly well), you'd do roughly 60 calls per file to paginate through (obviously on user scroll) - but for each of these 60 calls, you'll need to load up to all 10 files to get the right 1000 lines.
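
To make that cost concrete, a naive backend implementation of such time-based pagination would look roughly like the sketch below (the file names and timestamp format are assumptions, not an existing freqtrade function) - note that every page request re-reads the rotated files from disk:

```python
# Naive sketch: serve one "page" of log lines older than a given timestamp
# by re-scanning the rotated files on every request (assumed names/format).
from datetime import datetime
from pathlib import Path


def read_log_page(log_dir: Path, before: datetime, page_size: int = 1000) -> list[str]:
    matched: list[tuple[datetime, str]] = []
    # Rotated files: freqtrade.log, freqtrade.log.1, ..., freqtrade.log.10
    for logfile in log_dir.glob("freqtrade.log*"):
        for line in logfile.read_text(errors="replace").splitlines():
            try:
                # Assumes each line starts with a timestamp like "2024-11-11 12:00:00"
                ts = datetime.strptime(line[:19], "%Y-%m-%d %H:%M:%S")
            except ValueError:
                continue  # skip continuation lines (tracebacks etc.)
            if ts < before:
                matched.append((ts, line))
    matched.sort(key=lambda item: item[0])
    # Worst case this read all ~10 files (~100 MB) just to return 1000 lines.
    return [line for _, line in matched[-page_size:]]
```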

Filtering actually gets worse, as a filter could return anything between 1 and thousands of rows (it's more like 600k, but still) - so you'd again need to filter and paginate, which again would mean loading the data from disk on every call (we don't want to keep this in memory for sure - we won't know if the user needs it, and it may be outdated by the time it's requested again).
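
Level filtering has the same shape - a hypothetical per-request scan (again assuming the file layout and line format from the sketch above):

```python
# Hypothetical: level filter with offset-based pagination; the full scan
# has to be repeated from disk on every call, since nothing is cached.
from pathlib import Path


def filter_logs(log_dir: Path, level: str, offset: int = 0, limit: int = 1000) -> list[str]:
    token = f" - {level} - "  # assumes the usual "name - LEVEL - message" line format
    hits = [
        line
        for logfile in log_dir.glob("freqtrade.log*")
        for line in logfile.read_text(errors="replace").splitlines()
        if token in line
    ]
    # A filter can match anywhere from 1 line to hundreds of thousands,
    # so pagination is still needed on top of it.
    return hits[offset:offset + limit]
```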


The idea - great.
The effort required, compared to the benefit it brings to users - I'm not that certain our focus shouldn't be on other things first ...
