web crawler infinite scroll

Get post content from a Facebook page that loads posts on demand (infinite scroll), using Selenium and BeautifulSoup.

Getting started

1. Install all packages, or run the first cell in script.ipynb

Windows

pip install python-dotenv beautifulsoup4 selenium pandas numpy

macOS, Linux

pip3 install python-dotenv beautifulsoup4 selenium pandas numpy

2. Create and set up the .env file

Create .env:

touch .env

Set the environment variables:

USERNAME={your-username}
PASSWORD={your-password}
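
In the notebook these values can then be read with python-dotenv, roughly as in the sketch below (the FB_USERNAME / FB_PASSWORD names are only illustrative; script.ipynb may load them differently):

```python
import os
from dotenv import load_dotenv

# Load the key-value pairs from .env into the environment.
# override=True ensures a pre-existing USERNAME variable
# (which Windows sets by default) does not shadow the .env value.
load_dotenv(override=True)

FB_USERNAME = os.getenv("USERNAME")
FB_PASSWORD = os.getenv("PASSWORD")
```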

3. Install the driver

Download the ChromeDriver that matches your installed Chrome version from https://sites.google.com/chromium.org/driver/downloads?authuser=0

Move the driver file to the root directory of this project.
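
A minimal sketch of pointing Selenium at that driver, assuming Selenium 4 and a chromedriver binary in the project root (script.ipynb may construct the driver with different options):

```python
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

# Point Selenium at the chromedriver binary in the project root
# (use "chromedriver.exe" on Windows).
service = Service("./chromedriver")
driver = webdriver.Chrome(service=service)
driver.get("https://www.facebook.com/")
```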

4. Run script.ipynb

Run script.ipynb in your notebook environment (for example, Jupyter).
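
The core of the notebook is an infinite-scroll loop: log in, repeatedly scroll to the bottom so Facebook loads more posts, then hand the rendered HTML to BeautifulSoup. The sketch below only illustrates the idea; the actual selectors, timings, and login handling in script.ipynb will differ, and the post selector here is a placeholder:

```python
import time
from bs4 import BeautifulSoup
from selenium import webdriver
from selenium.webdriver.chrome.service import Service

driver = webdriver.Chrome(service=Service("./chromedriver"))
driver.get("https://www.facebook.com/")
# ... log in here with the credentials loaded in step 2 ...

SCROLL_PAUSE = 3    # seconds to wait for newly loaded posts
NUM_SCROLLS = 10    # how many times to scroll to the bottom

last_height = driver.execute_script("return document.body.scrollHeight")
for _ in range(NUM_SCROLLS):
    # Scrolling to the bottom triggers Facebook's infinite-scroll loading
    driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
    time.sleep(SCROLL_PAUSE)
    new_height = driver.execute_script("return document.body.scrollHeight")
    if new_height == last_height:
        break  # nothing new was loaded
    last_height = new_height

# Parse the fully rendered page with BeautifulSoup
soup = BeautifulSoup(driver.page_source, "html.parser")
posts = soup.find_all("div", class_="post-container")  # placeholder selector
driver.quit()
```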

5. Get the result from post_data.csv

The scraped posts are written to a CSV file named post_data.csv.
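
To inspect the output, the CSV can be loaded back with pandas (assuming the notebook writes a header row):

```python
import pandas as pd

df = pd.read_csv("post_data.csv")
print(df.head())
```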
