Merge pull request #226 from DocShow-AI/create_table_with_dataprofile
Create table with dataprofile
liberty-rising authored Jan 21, 2024
2 parents 0a32f32 + df4231e commit dbb601b
Showing 9 changed files with 71 additions and 77 deletions.
2 changes: 1 addition & 1 deletion .vscode/settings.json
@@ -1,7 +1,7 @@
{
"[python]": {
"editor.codeActionsOnSave": {
"source.organizeImports": true
"source.organizeImports": "explicit"
},
"editor.defaultFormatter": "ms-python.black-formatter",
"editor.formatOnSave": true
24 changes: 24 additions & 0 deletions README.md
@@ -73,3 +73,27 @@ docker-compose logs backend
```

That's it! You're now up and running with your development environment.

### Database Migrations

For any change to the database schema, create a migration using Alembic.

Generate a new Alembic revision:
```bash
cd backend
alembic revision -m "{description of revision}"
```

Your revision will be created in `/backend/alembic/versions`.
Edit the generated revision file to implement your changes; be sure to define both the `upgrade` and `downgrade` functions so the migration can be rolled back.
After editing the revision file, apply it with:
```bash
alembic upgrade head
```

If you deleted the volumes and recreated the containers, you must mark the database as already being at the latest revision, because the backend automatically creates the latest schema for you.
Do this by running:
```bash
alembic stamp head
```
6 changes: 1 addition & 5 deletions backend/Dockerfile.dev
@@ -13,15 +13,11 @@ RUN apt-get update && \
apt-get clean && \
rm -rf /var/lib/apt/lists/*

# Copy wait-for.sh into the container
COPY wait-for.sh /wait-for.sh
RUN chmod +x /wait-for.sh

# Install any needed packages specified in requirements.txt
RUN pip install --trusted-host pypi.python.org -r requirements.txt

# Make port 8000 available to the world outside this container
EXPOSE 8000

# Define the command to run your app using uvicorn
CMD ["/wait-for.sh", "postgres_db", "5432", "--", "uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000", "--reload"]
2 changes: 1 addition & 1 deletion backend/alembic.ini
@@ -1,4 +1,4 @@
[alembic]
# ... other settings ...
sqlalchemy.url = driver://user:pass@localhost/dummy
sqlalchemy.url = postgresql://admin:admin@postgres_db:5432/db
script_location = alembic
46 changes: 33 additions & 13 deletions backend/alembic/env.py
@@ -7,23 +7,43 @@
sys.path.append(os.path.dirname(os.path.dirname(os.path.abspath(__file__))))

from models.base import Base # noqa: E402
from settings import APP_ENV, DATABASE_URL # noqa: E402


def run_migrations_online():
connectable = context.config.attributes.get("connection")

# For app database
if connectable is None:
db_url = context.get_x_argument(as_dictionary=True).get("db", None)
if db_url:
connectable = engine_from_config(
context.config.get_section(context.config.config_ini_section),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
url=db_url,
)
# Run migrations for app database
with connectable.connect() as connection:
context.configure(connection=connection, target_metadata=Base)
with context.begin_transaction():
context.run_migrations()
if APP_ENV == "dev":
db_url = "postgresql://admin:admin@postgres_db:5432/db"
else:
db_url = DATABASE_URL
connectable = engine_from_config(
context.config.get_section(context.config.config_ini_section),
prefix="sqlalchemy.",
poolclass=pool.NullPool,
url=db_url,
)
# Run migrations for app database
with connectable.connect() as connection:
context.configure(connection=connection, target_metadata=Base)
with context.begin_transaction():
context.run_migrations()


def run_migrations_offline():
context.configure(url=DATABASE_URL)

with context.begin_transaction():
context.run_migrations()


def run_migrations():
if context.is_offline_mode():
run_migrations_offline()
else:
run_migrations_online()


run_migrations()
2 changes: 1 addition & 1 deletion backend/envs/dev/initialization/setup_dev_environment.py
@@ -173,7 +173,7 @@ def create_sample_dataprofile():
name="Sample Profile",
file_type="pdf",
organization_id=1,
description="Sample Description",
extract_instructions="Sample extract instructions",
)
# Using DatabaseManager to manage the database session
with DatabaseManager() as session:
43 changes: 0 additions & 43 deletions backend/wait-for.sh

This file was deleted.

16 changes: 10 additions & 6 deletions docker-compose.yml
@@ -10,9 +10,6 @@ services:
condition: service_healthy
env_file:
- ./backend/.env
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
ports:
- "8000:8000"
volumes:
@@ -34,10 +34,17 @@ services:
environment:
POSTGRES_USER: admin
POSTGRES_PASSWORD: admin
POSTGRES_DB: db
ports:
- "5432:5432"
volumes:
- pgdata:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U admin"]
test: ["CMD-SHELL", "pg_isready -U admin -d db"]
interval: 30s
timeout: 30s
retries: 3
volumes:
- ./initdb:/docker-entrypoint-initdb.d # Added line for initialization script

volumes:
pgdata:

7 changes: 0 additions & 7 deletions initdb/init.sh

This file was deleted.
