
Added simple HTTP API in order to obtain deeplinks #141

Open · wants to merge 4 commits into master
Conversation

mike-lambert

  • Moved from Alpine to Ubuntu
  • Added a simple HTTP handler to obtain deeplinks over the network instead of from the Docker logs
  • Moved port numbers to constants
  • Added two new launch modes for fake TLS obfuscation

@alexbers
Owner

Thanks for the request. I see some issues with it:

  • Alpine was changed to Ubuntu without any motivation for that part. Alpine is used because it is tiny in terms of disk space and memory consumption
  • The Docker image runs as the root user, which can be insecure
  • ENTRYPOINT is used instead of RUN; I prefer RUN because it allows entering the container with exec for debugging purposes
  • Copying all files into the container breaks the feature where you can modify the config and send SIGUSR2 to the proxy to reapply it, and it also requires rebuilding the container on every config modification
  • I didn't get the idea with the server API. As far as I can see in the source, a special port is opened which serves the proxy links as JSON. The proxy can be detected by scanning ports for such a service or by looking at the traffic
  • The basic HTTPServer can handle only one client at a time; if some client holds the httpd connection open, the other ones will be blocked
  • I didn't get why the two extra launch ways for a specific configuration are needed. There is a universal way to specify a custom config with ./mtprotoproxy.py myconfig. There is also an undocumented launch way which is useful when the proxy is installed using pip
  • API_CONFIG['PORT'] = config['PORT'] - this is not used further
  • I thought about generating the secret per run, but in that case, if the server is restarted, it is no longer accessible because a new secret is generated. Instead of a dedicated API for obtaining the secret, which will likely make the proxy detectable, I would advise generating the secret by some rule, like sha1(proxy_ip + proxy_port + some_string). That way it would look random, but be known (a sketch of this rule follows the list)
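
A minimal sketch of that deterministic-secret rule (illustrative only; the names PROXY_IP, PROXY_PORT, SALT and the 16-byte truncation are assumptions, not anything from this repository):

import hashlib

PROXY_IP = "1.2.3.4"    # assumed: the server's public IP
PROXY_PORT = 443        # assumed: the proxy port
SALT = "some_string"    # assumed: any fixed string known only to the operator

def derive_secret(ip, port, salt):
    # sha1(proxy_ip + proxy_port + some_string), truncated to 16 bytes
    # (32 hex chars), so the secret looks random but is reproducible.
    return hashlib.sha1(f"{ip}{port}{salt}".encode()).hexdigest()[:32]

print(derive_secret(PROXY_IP, PROXY_PORT, SALT))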

@mike-lambert
Author

OK, the objections regarding the base image, the single-threaded HTTP server, ENTRYPOINT instead of RUN, and the unused variable are reasonable. Let's discuss the rest, and the use case.

  • The Docker image runs as the root user. Yes, but when we run an image that contains all the intended files and maps nothing to the host file system, you shouldn't care about root: your software is already bounded by the container and can hardly escape. Leaving the executable on the host and limiting
  • Also, restarting via SIGUSR2 is a misuse of the Docker concept, IMHO.

Well, let me explain the use case. I run some software to provide publicly available censorship circumvention services.
Docker is used as a suitable deployment facility, the way it was designed: configuration is passed from the management tool on the host to the container via the environment, and the software is packed into an image and runs inside the container without any interaction with host files.

After launch I need to obtain the link that clients should use to connect. There are three approaches:

  1. Re-build the link outside the container from the provided configuration parameters. This is not good due to code duplication (the same facility already exists in your code).
  2. Parse the container log. This is unreliable.
  3. Get it somehow from the software running inside the container. That is what I tried to implement in this PR. Yes, it makes the running server more visible, so adding some kind of protection (e.g. an auth token) is also reasonable; see the sketch after this list.
    So, the main goal is making the dockered implementation friendly to cloud deployments for public use
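
A minimal sketch of what approach 3 with an auth token could look like (illustrative only, not the code from this PR; API_TOKEN and LINKS are assumed names). Using ThreadingHTTPServer from the standard library would also address the one-client-at-a-time objection above:

import json
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

API_TOKEN = "change-me"   # assumed: injected via environment or config
LINKS = ["tg://proxy?server=...&port=...&secret=..."]  # assumed: built by the proxy

class LinkHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Only answer callers that present the shared token.
        if self.headers.get("X-Auth-Token") != API_TOKEN:
            self.send_error(403)
            return
        body = json.dumps({"links": LINKS}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

ThreadingHTTPServer(("0.0.0.0", 8080), LinkHandler).serve_forever()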

I hope we can reach some kind of agreement and close this PR so we can continue with the discussed improvements and fixes.

@seriyps

seriyps commented Sep 18, 2019 via email

@mike-lambert
Author

mike-lambert commented Sep 18, 2019

Handling multiple IPs is already done by the existing code.
P.S. Take a look at http://1.helsinki.proxy.cyfrant.com:4005/ or http://1.helsinki.proxy.cyfrant.com:4007/

@alexbers
Owner

alexbers commented Sep 20, 2019

Recently I implemented an http-interface to generate metrics for Prometheus.

Now the link info can also be obtained using this interface.

Example (strings to add to config.py):

METRICS_PORT = 5555
METRICS_EXPORT_LINKS = True
METRICS_WHITELIST = ["127.0.0.1", "1.2.3.4"]

The links can be obtained with:
curl -q http://127.0.0.1:5555/ | grep -Po 'tg://[^"]+' | cat
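
For completeness, a small standard-library Python equivalent of that curl pipeline (assuming the same METRICS_PORT and a whitelisted caller):

import re
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:5555/") as resp:
    metrics = resp.read().decode()

for link in re.findall(r'tg://[^"]+', metrics):
    print(link)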

@pouryare
Contributor

@alexbers How can I access the http-interface from my PC?

@alexbers
Owner

Just add your IP address to METRICS_WHITELIST and you will be able to access it with a browser or curl.

@pouryare
Contributor

I added my IP to METRICS_WHITELIST. Then I tried to access it at my_server_ip:5555, but it didn't work.

@alexbers
Owner

Please make sure you are using the latest version from the master branch.

@pouryare
Contributor

I updated it 5 min ago

@alexbers
Owner

Did you add these strings?

METRICS_PORT = 5555
METRICS_EXPORT_LINKS = True

@pouryare
Contributor

Yes. I can access it with curl on the server.

@alexbers
Owner

So everything is good now?
