Environment
Operating System version: Firebase Functions for Python (v2), Cloud Run
Firebase SDK version: firebase-admin==6.4.0
Firebase Product: firestore
Python version: 3.11
Pip version: 23.0.1
Description
Context
We've noticed this issue in our production and dev environments in Firebase, and launched an investigation together with GCP support (Support Case 49715474). GCP acknowledged the strange behaviour but was ultimately unhelpful, offering only generic advice on partial mitigations that do not work in our use case and do not solve the issue. I continued the investigation on my own, and I believe I have pinned down the cause.
The issue seems to be rather generic and, in my opinion, very serious (silent failure -> no recovery until the container goes out of scope). It happens for at least one use case which I presume is common in Firebase Functions: reading a document in Firestore using the firebase_admin library. Since this seems to be one of the major use cases for firebase_admin, I decided to raise an issue here as well.
Here is a reference to the issue on firebase-functions-python: firebase/firebase-functions-python#181
I'd be very grateful for any suggestions to solve or mitigate this issue, as we're completely blocked on it right now.
What happens
The Python function silently freezes on an attempt to read a document in Firestore (firestore.client().document(...).get()) when multiple invocations are requested in a short period of time (in my tests, as few as 5 invocations submitted as quickly as possible were enough).
Further attempts to invoke the function do not produce any result.
This behaviour is observed in both HTTP-triggered and Firestore-triggered functions.
Reproduction and results
Setup
Standard Firebase Functions for Python setup, as suggested by the official documentation.
In the section below I provide minimal code to reproduce the behaviour. I provide the code for an HTTP-triggered function, as it is easier to test (a Firestore-triggered function needs a service key to write to the db, etc.; the behaviour is exactly the same).
I deploy the function to a Firebase project (on the Blaze plan):
npx firebase-tools --version
12.5.3
npx firebase-tools deploy --only functions:functions-python
Additional setup:
In Firestore, create a collection "test" with a document "testDoc" (with any content, e.g. {"test": "test"}). I did this manually through the Firebase Console; in reality this is not necessary, as the issue is reproducible even without it.
In Cloud Run -> open the function -> Security -> Allow unauthenticated invocations.
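As an aside, the test document can also be created programmatically rather than through the Console. Below is a minimal sketch using the Admin SDK; it is not part of the original issue and assumes suitable credentials (application-default credentials or a service-account key) are available where it runs:

```python
# Hypothetical one-off helper, not from the original issue: creates the
# "test/testDoc" document used by the repro.
import firebase_admin
from firebase_admin import firestore

firebase_admin.initialize_app()  # picks up application-default credentials
firestore.client().collection("test").document("testDoc").set({"test": "test"})
```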
Reproduction
I use hey to generate requests:
hey -n 10 -c 10 https://http-trig-read-fs-l-<...>.a.run.app
Summary:
Total: 20.0057 secs
Slowest: 0.0000 secs
Fastest: 0.0000 secs
Average: NaN secs
Requests/sec: 0.4999
Response time histogram:
Latency distribution:
Details (average, fastest, slowest):
DNS+dialup: NaN secs, 0.0000 secs, 0.0000 secs
DNS-lookup: NaN secs, 0.0000 secs, 0.0000 secs
req write: NaN secs, 0.0000 secs, 0.0000 secs
resp wait: NaN secs, 0.0000 secs, 0.0000 secs
resp read: NaN secs, 0.0000 secs, 0.0000 secs
Status code distribution:
Error distribution:
[10] Get "https://http-trig-read-fs-l-<...>-uc.a.run.app": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
As you can see, the endpoint is unresponsive. In the Cloud Run logs we can see the issue (I am posting the end of the log; there are no further log statements in response to this run).
The container becomes completely unresponsive, and the functions run until they time out (this can be seen in the Cloud Run metrics; the run times just above 1 minute are from when I did this load test).
However, if, after the container goes out of scope or the function is redeployed to renew the container revision, I first make a single request to "wake up" the container, it can then process very high loads:
hey -n 1 -c 1 https://http-trig-read-fs-l-<...>-uc.a.run.app
hey -n 100 -c 100 https://http-trig-read-fs-l-<...>-uc.a.run.app
Relevant Code:
firebase.json
functions-python/requirements.txt
functions-python/main.py
(I added print statements before every code line so that it is apparent where the function freezes.)
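The contents of these files did not survive in this copy of the issue, so here is a minimal sketch of what the HTTP-triggered repro plausibly looks like, based on the description above. The function name is inferred from the Cloud Run URL and the exact structure is an assumption, not the author's original file:

```python
# Hypothetical reconstruction of functions-python/main.py -- not the original file.
# Print statements before every line mirror the author's debugging approach, so the
# Cloud Run logs show exactly where execution stops.
import firebase_admin
from firebase_admin import firestore
from firebase_functions import https_fn

print("initializing firebase_admin app")
app = firebase_admin.initialize_app()

@https_fn.on_request()
def http_trig_read_fs_l(req: https_fn.Request) -> https_fn.Response:
    print("creating firestore client")
    db = firestore.client()
    print("getting document reference test/testDoc")
    doc_ref = db.collection("test").document("testDoc")
    print("calling get() on the document reference")
    snapshot = doc_ref.get()  # per the issue description, the freeze happens on this read
    print("returning response")
    return https_fn.Response(str(snapshot.to_dict()))
```

requirements.txt would contain at least firebase-admin==6.4.0 (per the environment above) and firebase-functions.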
Same issue. Until this gets fixed, to avoid the timeout errors I had to set the minimum number of instances to 1 (which incurs an additional cost with GCP).
Also, the timeouts were commonly associated with various internet bots invoking the functions, with several requests being generated during a cold start before the first request could be fulfilled. So @antopolskiy's analysis and use of hey to generate a small load are accurate and essential to recreate the problem.
@dblake10 thanks! I want to note that having min_instances=1 doesn't guarantee the absence of cold starts -- depending on the number of requests, an additional container could be spun up, which will in turn fail some of the requests. This is especially evident when the environment becomes more complex and the functions do more work. In conclusion, in our production code min_instances=1 did not solve the issue.
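For reference, a minimal sketch of how the min_instances mitigation discussed above can be applied with the Python Functions SDK; the handler shown is illustrative, only the decorator option matters here:

```python
# Keeps one instance warm to reduce cold starts; as noted above, this only
# partially mitigates the problem and adds cost, since additional instances
# can still be spun up (and freeze) under higher load.
from firebase_functions import https_fn

@https_fn.on_request(min_instances=1)
def http_trig_read_fs_l(req: https_fn.Request) -> https_fn.Response:
    return https_fn.Response("ok")
```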