
Dropped Courses #2262

Draft · wants to merge 52 commits into base: main

52 commits
af389cc
add todos
fekoch Jul 15, 2024
47ba26a
add types
fekoch Jul 15, 2024
8da669d
use icon instead of button
fekoch Aug 5, 2024
8bdb847
drop button
fekoch Aug 5, 2024
e8639ae
drop button via extra template
fekoch Aug 5, 2024
90c1ef8
implement modal
fekoch Aug 5, 2024
ff52040
allow_drop_out setting on evaluations
fekoch Aug 5, 2024
4783dc5
[wip] dropout questionnaire
fekoch Aug 5, 2024
f3e4b68
[wip] query dropout questionnaire
fekoch Aug 12, 2024
55a618c
add DROPOUT Questionnaire type
fekoch Oct 7, 2024
5c196f3
[wip] dropout form
fekoch Oct 7, 2024
bdc96c3
[wip] select default dropout
fekoch Oct 7, 2024
230e649
[wip] set_active_dropout
fekoch Oct 14, 2024
25f12b6
remove superfluous space
fekoch Oct 21, 2024
c9e44db
fix whitespace in template
fekoch Oct 21, 2024
3593c8a
add test
fekoch Oct 21, 2024
9f17fdf
test set_active_dropout
fekoch Oct 21, 2024
fd5e136
change to ajax
fekoch Oct 21, 2024
de59f5f
!fixup fix migration
fekoch Oct 21, 2024
291474e
select default for top/bottom questionnaires
fekoch Oct 28, 2024
1d89b89
resolve comment
fekoch Nov 4, 2024
01132e5
rebase migration
fekoch Nov 4, 2024
8334c77
UI test for /drop/id route
fekoch Nov 4, 2024
1fe5b22
manage.py formati
fekoch Nov 4, 2024
8fe2deb
set initial via initial=
fekoch Nov 4, 2024
f459319
prevent Dropout questionnaires from being set to a different type
fekoch Nov 11, 2024
3c86f93
merge view logic with vote-view
fekoch Nov 11, 2024
2553ec0
run formatter
fekoch Nov 11, 2024
778c36f
add todos
fekoch Nov 11, 2024
e8f6155
add test to make sure if dropout questionnaire is not included
fekoch Nov 11, 2024
f72f4a9
remove dropout questionnaires from average calculation
fekoch Nov 25, 2024
40927b8
precommit
fekoch Nov 25, 2024
bc082b5
inject dropout questionnaire inside get_vote_page_form_groups
fekoch Nov 25, 2024
f26e906
make dropout-questionnaire un-editable by managers after there are an…
fekoch Nov 25, 2024
5174138
don't show used counter for dropout questionnaires
fekoch Nov 25, 2024
f3b82da
add is_dropout_questionnaire property
fekoch Nov 25, 2024
ac8bfb8
fix missing dropout questionnaires
fekoch Nov 25, 2024
423a9aa
resolve todo
fekoch Nov 25, 2024
dbe39a9
make sure if there are dropout-q's, set one as active
fekoch Nov 25, 2024
51b98ae
add dropout count
fekoch Nov 25, 2024
894c70f
remove prefetch thoughts
fekoch Nov 25, 2024
4b9c4e7
can_be_deleted prop for DROPOUT
fekoch Nov 25, 2024
309c13f
add todo
fekoch Nov 25, 2024
d060b8b
test for dropout counter and allow_drop_out
fekoch Dec 2, 2024
7154858
handle ajax errors
fekoch Dec 2, 2024
f7d0ca1
add explanatory texts
fekoch Dec 2, 2024
bd2ae04
prevent HttpRequest typing issue
fekoch Dec 2, 2024
5f5234b
fix too many local variables
fekoch Dec 2, 2024
d4f4080
change naming to placate linter
fekoch Dec 2, 2024
0f5c592
add dropout questionnaire to testdata
fekoch Dec 16, 2024
502aa31
add todos
fekoch Dec 16, 2024
977a55d
wip dropout result view
fekoch Dec 16, 2024
369 changes: 366 additions & 3 deletions evap/development/fixtures/test_data.json

Large diffs are not rendered by default.

@@ -0,0 +1,44 @@
# Generated by Django 5.1.1 on 2024-10-14 20:03

from django.db import migrations, models


class Migration(migrations.Migration):

dependencies = [
("evaluation", "0147_unusable_password_default"),
]

operations = [
migrations.AddField(
model_name="evaluation",
name="allow_drop_out",
field=models.BooleanField(default=True, verbose_name="allow students to drop out"),
),
migrations.AddField(
model_name="questionnaire",
name="is_active_dropout_questionnaire",
field=models.BooleanField(
blank=True, default=None, null=True, unique=True, verbose_name="questionnaire is selected as active"
),
),
migrations.AlterField(
model_name="questionnaire",
name="type",
field=models.IntegerField(
choices=[
(10, "Top questionnaire"),
(20, "Contributor questionnaire"),
(30, "Bottom questionnaire"),
(40, "Dropout questionnaire"),
],
default=10,
verbose_name="type",
),
),
migrations.AddField(
model_name="evaluation",
name="dropout_count",
field=models.IntegerField(default=0, verbose_name="dropout count"),
),
]
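The `is_active_dropout_questionnaire` field added above relies on a nullable unique column: any number of rows may hold NULL, but at most one may hold TRUE. A minimal standalone sketch of that constraint behavior (plain sqlite3, not EvaP code):

```python
import sqlite3

# NULL values never collide under UNIQUE, so many "inactive" rows can coexist,
# while inserting a second TRUE row violates the constraint.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE questionnaire (id INTEGER PRIMARY KEY, is_active BOOLEAN UNIQUE)")
conn.execute("INSERT INTO questionnaire (is_active) VALUES (NULL)")
conn.execute("INSERT INTO questionnaire (is_active) VALUES (NULL)")  # fine: NULLs are distinct
conn.execute("INSERT INTO questionnaire (is_active) VALUES (1)")     # the single active row
try:
    conn.execute("INSERT INTO questionnaire (is_active) VALUES (1)")  # a second active row
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```

This is the same mechanism PostgreSQL uses for partial-NULL unique columns, which is why the migration needs no extra constraint beyond `unique=True, null=True`.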
49 changes: 48 additions & 1 deletion evap/evaluation/models.py
@@ -164,6 +164,12 @@ def general_questionnaires(self):
def contributor_questionnaires(self):
return super().get_queryset().filter(type=Questionnaire.Type.CONTRIBUTOR)

def dropout_questionnaires(self):
    return super().get_queryset().filter(type=Questionnaire.Type.DROPOUT)

Review comment (Member): If you're touching these and it's no trouble, it would probably be nice if you could also add the return type annotation (-> QuerySet[Questionnaire] or whatever it should be).

(same for the views below that return HttpResponse. It's a bit annoying, but for backwards compatibility, no annotation always means "Any", even if the method always returns some specific type. I wonder if mypy issue 4409 will one day be implemented.)

def active_dropout_questionnaire(self):
return super().get_queryset().filter(type=Questionnaire.Type.DROPOUT, is_active_dropout_questionnaire=True)
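Not part of the PR: a sketch of the annotation the review comment asks for, with stand-in classes replacing the Django types so the snippet runs on its own (the real code would annotate against django.db.models.QuerySet):

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class QuerySet(Generic[T]):  # stand-in for django.db.models.QuerySet
    pass

class Questionnaire:  # stand-in for the real model
    pass

class QuestionnaireManager:
    # Without an annotation, mypy treats the return type as Any for backwards
    # compatibility; the explicit annotation restores checking at call sites.
    def dropout_questionnaires(self) -> "QuerySet[Questionnaire]":
        return QuerySet()

qs = QuestionnaireManager().dropout_questionnaires()
print(type(qs).__name__)  # QuerySet
```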


class Questionnaire(models.Model):
"""A named collection of questions."""
@@ -172,6 +178,11 @@ class Type(IntegerChoices):
TOP = 10, _("Top questionnaire")
CONTRIBUTOR = 20, _("Contributor questionnaire")
BOTTOM = 30, _("Bottom questionnaire")
DROPOUT = 40, _("Dropout questionnaire")

# TODO@Felix: switch logic to add the Dropout Questionnaire to general_contribution, if allow_dropout is true
# maybe even allow_dropout <=> exists questionnaire in general_contribution with questionnaire.type=Dropout
# TODO@Felix: (?) allow selecting Dropout-Questionnaire when creating evaluation

type = models.IntegerField(choices=Type.choices, verbose_name=_("type"), default=Type.TOP)

@@ -193,6 +204,12 @@ class Type(IntegerChoices):

order = models.IntegerField(verbose_name=_("ordering index"), default=0)

# (unique=True, blank=True, null=True) allows having multiple non-active but only one active questionnaire
# TODO@Felix: hidden True?
is_active_dropout_questionnaire = models.BooleanField(
default=None, unique=True, blank=True, null=True, verbose_name=_("questionnaire is selected as active")
)

class Visibility(models.IntegerChoices):
HIDDEN = 0, _("Don't show")
MANAGERS = 1, _("Managers only")
@@ -224,6 +241,15 @@ def __lt__(self, other):
def __gt__(self, other):
return (self.type, self.order, self.pk) > (other.type, other.order, other.pk)

@transaction.atomic()
def set_active_dropout(self):
if not self.is_dropout_questionnaire:
raise ValueError("Can only set DROPOUT-type Questionnaires as active dropout questionnaire.")

Questionnaire.objects.active_dropout_questionnaire().update(is_active_dropout_questionnaire=None)
self.is_active_dropout_questionnaire = True
self.save()

@property
def is_above_contributors(self):
return self.type == self.Type.TOP
@@ -233,7 +259,20 @@ def is_below_contributors(self):
return self.type == self.Type.BOTTOM

@property
def can_be_edited_by_manager(self):
def is_dropout_questionnaire(self):
return self.type == self.Type.DROPOUT

@property
def can_be_edited_by_manager(
self,
):
if self.is_dropout_questionnaire:
assert set(Answer.__subclasses__()) == {TextAnswer, RatingAnswerCounter}
return (
not TextAnswer.objects.filter(question__questionnaire=self).exists()
and not RatingAnswerCounter.objects.filter(question__questionnaire=self).exists()
)

if is_prefetched(self, "contributions"):
if all(is_prefetched(contribution, "evaluation") for contribution in self.contributions.all()):
return all(
@@ -244,6 +283,9 @@ def can_be_edited_by_manager(self):

@property
def can_be_deleted_by_manager(self):
if self.is_dropout_questionnaire:
return self.can_be_edited_by_manager

return not self.contributions.exists()

@property
@@ -455,6 +497,9 @@ class State(models.IntegerChoices):
)
_voter_count = models.IntegerField(verbose_name=_("voter count"), blank=True, null=True, default=None)

# TODO@Felix: UI for dropout count
dropout_count = models.IntegerField(verbose_name=_("dropout count"), default=0)

# when the evaluation takes place
vote_start_datetime = models.DateTimeField(verbose_name=_("start of evaluation"))
# Usually the property vote_end_datetime should be used instead of this field
@@ -463,6 +508,8 @@
# Disable to prevent editors from changing evaluation data
allow_editors_to_edit = models.BooleanField(verbose_name=_("allow editors to edit"), default=True)

allow_drop_out = models.BooleanField(verbose_name=_("allow students to drop out"), default=True)

evaluation_evaluated = Signal()

# whether to wait for grade uploading before publishing results
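The set_active_dropout method above performs the swap in two steps inside one transaction: first clear the flag on the current holder, then set it on the new one, so the unique constraint never sees two TRUE rows at once. A plain-Python sketch of that ordering (illustrative dicts, not the real model):

```python
questionnaires = [
    {"id": 1, "active": True},
    {"id": 2, "active": None},
    {"id": 3, "active": None},
]

def set_active(items, target_id):
    # Step 1: demote whichever entry is currently active
    # (mirrors the .update(is_active_dropout_questionnaire=None) bulk update).
    for item in items:
        if item["active"]:
            item["active"] = None
    # Step 2: promote the target (mirrors self.save() with the flag set).
    for item in items:
        if item["id"] == target_id:
            item["active"] = True

set_active(questionnaires, 2)
print([item["id"] for item in questionnaires if item["active"]])  # [2]
```

Doing the steps in the opposite order would briefly have two active rows and trip the unique constraint, which is why the method wraps both writes in transaction.atomic().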
28 changes: 28 additions & 0 deletions evap/evaluation/tests/test_models.py
@@ -1,6 +1,7 @@
from datetime import date, datetime, timedelta
from unittest.mock import Mock, call, patch

import django.db.utils
from django.contrib.auth.models import Group
from django.core import mail
from django.core.cache import caches
@@ -1139,3 +1140,30 @@ class QuestionnaireTests(TestCase):
def test_locked_contributor_questionnaire(self):
questionnaire = baker.prepare(Questionnaire, is_locked=True, type=Questionnaire.Type.CONTRIBUTOR)
self.assertRaises(ValidationError, questionnaire.clean)

def test_two_active_dropout_questionnaires_not_allowed(self):
q1 = baker.prepare(Questionnaire, type=Questionnaire.Type.DROPOUT)
q2 = baker.prepare(Questionnaire, type=Questionnaire.Type.DROPOUT)

q1.is_active_dropout_questionnaire = True
q1.save()

with self.assertRaises(django.db.utils.IntegrityError):
q2.is_active_dropout_questionnaire = True
q2.save()

def test_set_active_removes_active_for_others(self):
q1 = baker.make(Questionnaire, type=Questionnaire.Type.DROPOUT)
q2 = baker.make(Questionnaire, type=Questionnaire.Type.DROPOUT)
q3 = baker.make(Questionnaire, type=Questionnaire.Type.DROPOUT)

self.assertEqual(Questionnaire.objects.dropout_questionnaires().count(), 3)
self.assertFalse(Questionnaire.objects.active_dropout_questionnaire().exists())

q1.set_active_dropout()
self.assertTrue(q1.is_active_dropout_questionnaire)

self.assertEqual(Questionnaire.objects.active_dropout_questionnaire().first(), q1)

q2.set_active_dropout()
q3.set_active_dropout()
102 changes: 101 additions & 1 deletion evap/evaluation/tests/test_views.py
@@ -5,7 +16,7 @@
from django.urls import reverse
from model_bakery import baker

from evap.evaluation.models import Evaluation, Question, QuestionType, Semester, UserProfile
from evap.evaluation.models import (
NO_ANSWER,
Contribution,
Evaluation,
Question,
Questionnaire,
QuestionType,
Semester,
UserProfile,
)
from evap.evaluation.tests.tools import (
WebTest,
WebTestWith200Check,
@@ -14,6 +23,7 @@
store_ts_test_asset,
)
from evap.staff.tests.utils import WebTestStaffMode
from evap.student.tools import answer_field_id


class RenderJsTranslationCatalog(WebTest):
@@ -284,3 +294,93 @@ def test_reset_to_new(self) -> None:
self.reset_from_x_to_new(s, success_expected=True)
for s in invalid_start_states:
self.reset_from_x_to_new(s, success_expected=False)


class TestDropoutQuestionnaire(WebTest):
@classmethod
def setUpTestData(cls) -> None:
cls.user = baker.make(UserProfile, email="[email protected]")
cls.user2 = baker.make(UserProfile, email="[email protected]")
cls.evaluation = baker.make(
Evaluation, state=Evaluation.State.IN_EVALUATION, participants=[cls.user, cls.user2]
)

cls.q1 = baker.make(
Question,
type=QuestionType.POSITIVE_LIKERT,
)
cls.evaluation.general_contribution.questionnaires.add(cls.q1.questionnaire)

cls.q2 = baker.make(
Question,
type=QuestionType.NEGATIVE_YES_NO,
)
cls.contribution = baker.make(
Contribution,
contributor=baker.make(UserProfile, email="[email protected]"),
questionnaires=[cls.q2.questionnaire],
evaluation=cls.evaluation,
)

def assert_no_answer_set(self, form):
self.assertEqual(
form.fields[answer_field_id(self.evaluation.general_contribution, self.q1.questionnaire, self.q1)][0].value,
str(NO_ANSWER),
f"Rating questions in the general contribution should be set to NO_ANSWER (eg. {NO_ANSWER})",
)
self.assertEqual(
form.fields[answer_field_id(self.contribution, self.q2.questionnaire, self.q2)][0].value,
str(NO_ANSWER),
f"Rating questions in contributor questionnaires should be set to NO_ANSWER (eg. {NO_ANSWER})",
)

def test_no_dropout_questionnaire_chosen_does_still_work(self):
self.assertFalse(Questionnaire.objects.dropout_questionnaires().exists())
response = self.app.get(url=reverse("student:drop", args=[self.evaluation.id]), user=self.user, status=200)

self.assert_no_answer_set(response.forms["student-vote-form"])

def test_chosing_dropout_sets_to_no_answer(self):
dropout_questionnaire = baker.make(Questionnaire, type=Questionnaire.Type.DROPOUT)
dropout_question = baker.make(Question, type=QuestionType.TEXT, questionnaire=dropout_questionnaire)

dropout_questionnaire.set_active_dropout()
response = self.app.get(url=reverse("student:drop", args=[self.evaluation.id]), user=self.user, status=200)
form = response.forms["student-vote-form"]

self.assertIn(
answer_field_id(self.evaluation.general_contribution, dropout_questionnaire, dropout_question),
form.fields.keys(),
"The dropout Questionnaire should be shown",
)
self.assert_no_answer_set(form)

def test_allow_dropout_is_respected(self):
_ = self.app.get(url=reverse("student:vote", args=[self.evaluation.id]), user=self.user, status=200)
_ = self.app.get(url=reverse("student:drop", args=[self.evaluation.id]), user=self.user, status=200)

no_dropout_evaluation = baker.make(
Evaluation, state=Evaluation.State.IN_EVALUATION, participants=[self.user], allow_drop_out=False
)

_ = self.app.get(url=reverse("student:vote", args=[no_dropout_evaluation.id]), user=self.user, status=200)
_ = self.app.get(url=reverse("student:drop", args=[no_dropout_evaluation.id]), user=self.user, status=400)

def test_dropping_out_increments_dropout_counter(self):
self.assertEqual(self.evaluation.dropout_count, 0, "dropout_count should be initially zero")

form = self.app.get(url=reverse("student:drop", args=[self.evaluation.id]), user=self.user, status=200).forms[
"student-vote-form"
]
form.submit()
self.evaluation = Evaluation.objects.get(pk=self.evaluation.pk)

self.assertEqual(self.evaluation.dropout_count, 1, "dropout count should increment with dropout")

form = self.app.get(url=reverse("student:vote", args=[self.evaluation.id]), user=self.user2, status=200).forms[
"student-vote-form"
]
form.submit()
self.evaluation = Evaluation.objects.get(pk=self.evaluation.pk)

self.assertEqual(self.evaluation.dropout_count, 1, "dropout_count should not change on normal vote")
11 changes: 11 additions & 0 deletions evap/results/templates/results_evaluation_detail.html
@@ -227,6 +227,17 @@ <h3>{{ evaluation.full_name }} ({{ evaluation.course.semester.name }})</h3>
</div>
{% endif %}

<!-- TODO@Felix: WIP Dropout view -->
<div class="card card-outline-primary">
<div class="card-header d-flex flex-row justify-content-between">
<span>{% translate 'Dropout' %}</span>
<span class="bg-secondary-outline">Dropout count: {{ evaluation.dropout_count }}</span>
</div>
<div class="card-body">
{% include 'results_evaluation_detail_questionnaires.html' with questionnaire_result=dropout_results %}
</div>
</div>

{# Leave some space for big tooltips #}
<div class="py-5 py-md-0"></div>
{% endblock %}
19 changes: 19 additions & 0 deletions evap/results/tests/test_tools.py
@@ -435,6 +435,25 @@ def test_calculate_average_course_distribution(self):
self.assertEqual(distribution[3], 0)
self.assertEqual(distribution[4], 0)

def test_dropout_questionnaires_are_not_included(self):
general_questionnaire = baker.make(Questionnaire, type=Questionnaire.Type.TOP)
general_question = baker.make(Question, questionnaire=general_questionnaire, type=QuestionType.GRADE)

dropout_questionnaire = baker.make(Questionnaire, type=Questionnaire.Type.DROPOUT)
dropout_question = baker.make(Question, questionnaire=dropout_questionnaire, type=QuestionType.GRADE)

contribution = baker.make(
Contribution, evaluation=self.evaluation, questionnaires=[general_questionnaire, dropout_questionnaire]
)

make_rating_answer_counters(general_question, contribution, [10, 10, 0, 0, 0])
make_rating_answer_counters(dropout_question, contribution, [0, 0, 0, 0, 10])

cache_results(self.evaluation)

calculated_grade = distribution_to_grade(calculate_average_distribution(self.evaluation))
self.assertAlmostEqual(calculated_grade, 1.5)
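The expected grade in the test above follows from the counters alone: the general question receives ten grade-1 and ten grade-2 answers, while the dropout question's counters [0, 0, 0, 0, 10] are excluded from the average. A quick check of that arithmetic:

```python
# Counted answers per grade 1..5 for the general (TOP) question only;
# the dropout questionnaire's answers are filtered out before averaging.
counts = [10, 10, 0, 0, 0]
total = sum(counts)
grade = sum((index + 1) * count for index, count in enumerate(counts)) / total
print(grade)  # 1.5
```

Had the dropout counters been included, the ten grade-5 answers would have pulled the average to (10 + 20 + 50) / 30 ≈ 2.67, so the assertAlmostEqual(…, 1.5) pins down the exclusion.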


class TestTextAnswerVisibilityInfo(TestCase):
@classmethod
11 changes: 8 additions & 3 deletions evap/results/tools.py
@@ -194,8 +194,10 @@ def cache_results(evaluation, *, refetch_related_objects=True):
def get_results(evaluation: Evaluation) -> EvaluationResult:
assert evaluation.state in STATES_WITH_RESULTS_CACHING | {Evaluation.State.IN_EVALUATION}

if evaluation.state == Evaluation.State.IN_EVALUATION:
return _get_results_impl(evaluation)
# TODO@remove commenting out
# if evaluation.state == Evaluation.State.IN_EVALUATION:
# return _get_results_impl(evaluation)
return _get_results_impl(evaluation)

cache_key = get_results_cache_key(evaluation)
result = caches["results"].get(cache_key)
@@ -392,7 +394,10 @@ def calculate_average_distribution(evaluation):
grouped_results = defaultdict(list)
for contribution_result in get_results(evaluation).contribution_results:
for questionnaire_result in contribution_result.questionnaire_results:
grouped_results[contribution_result.contributor].extend(questionnaire_result.question_results)
if (
questionnaire_result.questionnaire.type != Questionnaire.Type.DROPOUT
): # dropout questionnaires are not counted
grouped_results[contribution_result.contributor].extend(questionnaire_result.question_results)

evaluation_results = grouped_results.pop(None, [])

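A toy re-creation of the filtering step added to calculate_average_distribution above (illustrative tuples, not the real EvaP result objects): results belonging to DROPOUT-type questionnaires are skipped while grouping question results by contributor.

```python
from collections import defaultdict

DROPOUT = 40  # Questionnaire.Type.DROPOUT in the real code

# (contributor, questionnaire type, question results); None = general contribution
results = [
    ("alice", 10, [1, 2]),
    ("alice", DROPOUT, [5]),
    (None, 10, [3]),
]

grouped = defaultdict(list)
for contributor, questionnaire_type, question_results in results:
    if questionnaire_type != DROPOUT:  # dropout questionnaires are not counted
        grouped[contributor].extend(question_results)

print(dict(grouped))  # {'alice': [1, 2], None: [3]}
```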
3 changes: 3 additions & 0 deletions evap/settings.py
@@ -110,6 +110,9 @@
# Questionnaires automatically added to exam evaluations
EXAM_QUESTIONNAIRE_IDS: list[int] = []

# Questionnaire that is added on top when marking an evaluation as "dropped out"
DROPOUT_QUESTIONNAIRE_ID = 1

### Installation specific settings

# People who get emails on errors.