TEAM 1 WORK IN PROGRESS
- "Daily" Scrum Meetings every Tuesday at 9:00 AM here and every Thursday at 12:00 PM here
- Sprint Review & Planning every even calendar week on Wednesday at 12:00 PM here
- Ad hoc meetings are often planned via our Gitter channel
Collection of all relevant Links
PR of the last sprint
- Should we merge directly into the PR branch or make a PR?
- Commit directly to the branch
- Marisa already started fixing Franziska's requested changes
- Minor changes requested by jury and Milos
- Everybody fixes their own work? YES
- Marisa and Claas will discuss the specific changes to Marisa's work
Installing via setup.py: did that work for anyone? (import error)
- Marisa did the imports as in ART
- Importing modules works normally that way
- Create an issue (a minimal setup.py sketch follows below)
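If the import error comes from packages not being picked up during install, a minimal setup.py along these lines is one possible shape. This is only a sketch: the package name, version, and dependency list are assumptions, not the project's actual configuration.

```python
# Sketch of a minimal setup.py; the package name "privacy_evaluator" and the
# dependency list are assumptions for illustration, not the project's real setup.
from setuptools import setup, find_packages

setup(
    name="privacy_evaluator",
    version="0.1.0",
    # find_packages() only discovers directories that contain an __init__.py;
    # missing packages here is a common cause of ImportError after installing.
    packages=find_packages(),
    install_requires=[
        "numpy",
        "adversarial-robustness-toolbox",  # ART, as referenced above
    ],
)
```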
GitHub Actions and branches?
- Should the branch just be called team1?
- Nobody can push if branch names overlap
- So no.
- Create separate branches per sprint
- Run the tests on every branch?
- Might be annoying for team2
- Can't they just ignore the tests then?
- Tests take a really long time (since attack models are trained)
- Milos will look into that
Assigning Issues/Work
- Little Tasks:
- Create team1sprint3 branch -> @marisanest
- Create Wiki Page for work documentation and literature
- fixing setup.py -> issue
- create separate Webex room -> @blauertee
- GitHub actions and branches -> @budmil
- Issues:
- See assignments in the issues
- privacy score calculation should happen in the metrics class
- visualisation as in here should be possible
- Too many issues about MIA on single data points?
- Nah ... just close all of them with one PR
- writing all the texts for the README
- renaming model_card_info
- not that important
- will also require adjustments in the notebooks
- metrics output as JSON
- goes to the metrics class
- only calculate that once, because these values do not change (see the sketch after this list)
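A rough sketch of how these two metrics points could fit together, assuming a metrics class along the lines discussed above; the class name, method names, and the placeholder score aggregation are made up for illustration and are not the project's actual API.

```python
import json


class Metrics:
    """Illustrative only: caches the privacy score and serialises it to JSON.

    Class and method names are assumptions, not the project's real API.
    """

    def __init__(self, attack_accuracies):
        self._attack_accuracies = attack_accuracies
        self._privacy_score = None  # cached, since the value does not change

    def privacy_score(self):
        # Calculate only once; later calls return the cached value.
        if self._privacy_score is None:
            # Placeholder aggregation: mean attack accuracy over all runs.
            self._privacy_score = sum(self._attack_accuracies) / len(self._attack_accuracies)
        return self._privacy_score

    def to_json(self):
        # Metrics output as JSON, as noted above.
        return json.dumps({"privacy_score": self.privacy_score()})
```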
Trying to write up an easy-to-understand definition, which is WIP as you can see :) Definition:
In our MIA black-box attack, we supply train and test data to train shadow models. These are not the same as the target classifier's train and test data, so as to simulate an actual attacker's behaviour. To measure how successful the attack was (or how vulnerable the target classifier is), these results should at some point be compared to the actual train and test data of the target classifier. But for our attacks, only one set of train and test data is ever supplied. This feels contradictory to me. What am I missing?
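One way to picture the split the question is about, as a rough sketch (the random data, the 50/50 proportions, and the variable names are assumptions, not our actual pipeline): the shadow models get data disjoint from the target classifier's data, and attack success is ultimately judged against the target's real train/test membership.

```python
# Illustrative sketch of the data handling described above; sizes, names and
# the random data are assumptions, not the project's actual pipeline.
import numpy as np
from sklearn.model_selection import train_test_split

X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# Disjoint pools: one for the target classifier, one for the shadow models,
# so the simulated attacker never trains on the target's own data.
X_target, X_shadow, y_target, y_shadow = train_test_split(X, y, test_size=0.5)

# The target's own train/test split defines the ground-truth membership labels
# ("member" = record was in X_target_train).
X_target_train, X_target_test, y_target_train, y_target_test = train_test_split(
    X_target, y_target, test_size=0.5
)

# The shadow models get their own train/test split to mimic the target's setup.
X_shadow_train, X_shadow_test, y_shadow_train, y_shadow_test = train_test_split(
    X_shadow, y_shadow, test_size=0.5
)

# To measure attack success, the attack's membership predictions would later be
# compared against which records actually sit in X_target_train / X_target_test.
```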