- After several years of planning and development we have released v4 of our APIs.
+ After several years of planning and development, we have released v4 of our APIs.
This upgrade responds to feedback we have received over the years and should be much better for our users — faster, more featureful, more scalable, and more accurate.
- Unfortunately, we couldn't make these new APIs completely backwards compatible so this guide explains what's new.
+ Unfortunately, we couldn't make these new APIs completely backwards compatible, so this guide explains what's new.
Support
@@ -71,10 +71,10 @@
Timeline for Changes
What If I Do Nothing?
- You might be fine. Most of the database and search APIs are only changing slightly and v3 will be supported for some period of time.
+ You might be fine. Most of the database and search APIs are only changing slightly, and v3 will be supported for some period of time.
But you should read this guide to see if any changes are needed to your application.
- The remainder of this guide is in three section:
+ The remainder of this guide is in three sections:
New features you can expect
How to migrate database APIs
@@ -90,7 +90,7 @@
Cursor-based pagination
ElasticSearch
v4 of the Search API is powered by ElasticSearch instead of Solr. This is a huge upgrade to our API and search engine.
- Some of the improvements include:
+ Some improvements include:
In v4, all PACER cases are now searchable. In v3 you only got results if a case had a docket entry.
@@ -107,12 +107,12 @@
ElasticSearch
Camelcase words like "McDonalds" are more searchable.
Highlighting is more consistent and can be disabled for better performance.
- Emojis and unicode characters are now searchable.
+ Emojis and Unicode characters are now searchable.
Docket number and other fielded searches are more robust.
We cannot continue running Solr forever, but we can do our best to support v3 of the API. To do this, on November 25, 2024,
v3 of the Search API will be upgraded to use ElasticSearch. We expect this to support most uses, but it will cause some breaking changes, as outlined in this section.
diff --git a/cl/api/templates/rest-docs-vlatest.html b/cl/api/templates/rest-docs-vlatest.html
index 90bc8a7159..91e7205bcd 100644
--- a/cl/api/templates/rest-docs-vlatest.html
+++ b/cl/api/templates/rest-docs-vlatest.html
@@ -324,6 +324,30 @@
Ordering
Ordering by fields with duplicate values is non-deterministic. If you wish to order by such a field, you should provide a second field as a tie-breaker to consistently order results. For example, ordering by date_filed will not return consistent ordering for items that have the same date, but this can be fixed by ordering by date_filed,id. In that case, if two items have the same date_filed value, the tie will be broken by the id field.
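For example, a deterministic ordering request might look like the following (a minimal sketch using Python's requests library; the endpoint URL, the order_by parameter name, and the token are illustrative assumptions):

import requests

headers = {"Authorization": "Token <your-api-token>"}  # hypothetical token
params = {"order_by": "date_filed,id"}  # id breaks ties between equal dates
response = requests.get(
    "https://www.courtlistener.com/api/rest/v4/dockets/",  # assumed endpoint
    params=params,
    headers=headers,
)
for docket in response.json()["results"]:
    print(docket["id"], docket["date_filed"])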
+ Counting
+ To retrieve the total number of items matching your query without fetching all the data, you can use the count=on parameter. This is useful for verifying filters and understanding the scope of your query results without incurring the overhead of retrieving full datasets.
+ You can follow this URL to get the total count of items matching your query.
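As a minimal sketch, a count-only request could look like this (the endpoint URL and filter values are illustrative; the count=on behavior of returning a bare count body matches the tests later in this diff):

import requests

headers = {"Authorization": "Token <your-api-token>"}  # hypothetical token
params = {"court": "canb", "count": "on"}  # ask for the count only
response = requests.get(
    "https://www.courtlistener.com/api/rest/v4/dockets/",  # assumed endpoint
    params=params,
    headers=headers,
)
print(response.json())  # e.g. {"count": 7}; no results are serialized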
Field Selection
To save bandwidth and increase serialization performance, fields can be limited by using the fields parameter with a comma-separated list of fields.
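For instance, a request limited to a few fields might look like this (a sketch; the endpoint and the exact field names are assumptions):

import requests

headers = {"Authorization": "Token <your-api-token>"}  # hypothetical token
params = {"fields": "id,case_name,date_filed"}  # serialize only these fields
response = requests.get(
    "https://www.courtlistener.com/api/rest/v4/dockets/",  # assumed endpoint
    params=params,
    headers=headers,
)
print(sorted(response.json()["results"][0]))  # ['case_name', 'date_filed', 'id']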
diff --git a/cl/api/tests.py b/cl/api/tests.py
index 63a8e14aa5..e523d46a80 100644
--- a/cl/api/tests.py
+++ b/cl/api/tests.py
@@ -1,5 +1,5 @@
import json
-from datetime import date, timedelta
+from datetime import date, datetime, timedelta, timezone
from http import HTTPStatus
from typing import Any, Dict
from unittest import mock
@@ -86,7 +86,7 @@
TagViewSet,
)
from cl.search.factories import CourtFactory, DocketFactory
-from cl.search.models import SOURCES, Docket, Opinion
+from cl.search.models import SOURCES, Court, Docket, Opinion
from cl.stats.models import Event
from cl.tests.cases import SimpleTestCase, TestCase, TransactionTestCase
from cl.tests.utils import MockResponse, make_client
@@ -320,6 +320,56 @@ def test_recap_api_required_filter(self, mock_logging_prefix) -> None:
r = self.client.get(path, {"pacer_doc_id__in": "17711118263,asdf"})
self.assertEqual(r.status_code, HTTPStatus.OK)
+ def test_count_on_query_counts(self, mock_logging_prefix) -> None:
+ """
+ Check that a v4 API request with param `count=on` only performs
+ 2 queries to the database: one to check the authenticated user,
+ and another to select the count.
+ """
+ with CaptureQueriesContext(connection) as ctx:
+ path = reverse("docket-list", kwargs={"version": "v4"})
+ params = {"count": "on"}
+ self.client.get(path, params)
+
+ self.assertEqual(
+ len(ctx.captured_queries),
+ 2,
+ msg=f"{len(ctx.captured_queries)} queries executed, 2 expected",
+ )
+
+ executed_queries = [query["sql"] for query in ctx.captured_queries]
+ expected_queries = [
+ 'FROM "auth_user" WHERE "auth_user"."id" =',
+ 'SELECT COUNT(*) AS "__count"',
+ ]
+ for executed_query, expected_fragment in zip(
+ executed_queries, expected_queries
+ ):
+ self.assertIn(
+ expected_fragment,
+ executed_query,
+ msg=f"Expected query fragment not found: {expected_fragment}",
+ )
+
+ def test_standard_request_no_count_query(
+ self, mock_logging_prefix
+ ) -> None:
+ """
+ Check that a v4 API request without param `count=on` doesn't perform
+ a count query.
+ """
+ with CaptureQueriesContext(connection) as ctx:
+ path = reverse("docket-list", kwargs={"version": "v4"})
+ self.client.get(path)
+
+ executed_queries = [query["sql"] for query in ctx.captured_queries]
+ for sql in executed_queries:
+ self.assertNotIn(
+ 'SELECT COUNT(*) AS "__count"',
+ sql,
+ msg="Unexpected COUNT query found in standard request.",
+ )
+
class ApiEventCreationTestCase(TestCase):
"""Check that events are created properly."""
@@ -484,6 +534,8 @@ def setUpTestData(cls) -> None:
cls.audio_path_v3 = reverse("audio-list", kwargs={"version": "v3"})
cls.audio_path_v4 = reverse("audio-list", kwargs={"version": "v4"})
+ cls.debt_path_v4 = reverse("debt-list", kwargs={"version": "v4"})
+ cls.debt_path_v3 = reverse("debt-list", kwargs={"version": "v3"})
def setUp(self) -> None:
self.r = get_redis_interface("STATS")
@@ -595,6 +647,27 @@ async def test_allow_v4_for_anonymous_users(self, mock_api_prefix) -> None:
response = await self.async_client.get(self.audio_path_v4)
self.assertEqual(response.status_code, HTTPStatus.OK)
+ async def test_confirm_v4_post_requests_are_not_allowed(
+ self, mock_api_prefix
+ ) -> None:
+ """Confirm V4 users are not allowed to POST requests."""
+ response = await self.client_2.post(self.debt_path_v4, {})
+ self.assertEqual(response.status_code, HTTPStatus.FORBIDDEN)
+
+ async def test_confirm_v3_post_requests_are_not_allowed(
+ self, mock_api_prefix
+ ) -> None:
+ """Confirm V3 users are not allowed to POST requests."""
+ response = await self.client_2.post(self.debt_path_v3, {})
+ self.assertEqual(response.status_code, HTTPStatus.FORBIDDEN)
+
+ async def test_confirm_anonymous_post_requests_are_not_allowed(
+ self, mock_api_prefix
+ ) -> None:
+ """Confirm anonymous users are not allowed to POST requests."""
+ response = await self.async_client.post(self.debt_path_v4, {})
+ self.assertEqual(response.status_code, HTTPStatus.UNAUTHORIZED)
+
class DRFOrderingTests(TestCase):
"""Does ordering work generally and specifically?"""
@@ -643,12 +716,203 @@ async def assertCountInResults(self, expected_count):
f"the JSON: \n{r.json()}",
)
got = len(r.data["results"])
+ try:
+ path = r.request.get("path")
+ query_string = r.request.get("query_string")
+ url = f"{path}?{query_string}"
+ except AttributeError:
+ url = self.path
self.assertEqual(
got,
expected_count,
- msg=f"Expected {expected_count}, but got {got}.\n\nr.data was: {r.data}",
+ msg=f"Expected {expected_count}, but got {got} in {url}\n\nr.data was: {r.data}",
+ )
+ return r
+
+
+class DRFCourtApiFilterTests(TestCase, FilteringCountTestCase):
+ @classmethod
+ def setUpTestData(cls):
+ Court.objects.all().delete()
+
+ cls.parent_court = CourtFactory(
+ id="parent1",
+ full_name="Parent Court",
+ short_name="PC",
+ citation_string="PC",
+ in_use=True,
+ has_opinion_scraper=True,
+ has_oral_argument_scraper=False,
+ position=1,
+ start_date=date(2000, 1, 1),
+ end_date=None,
+ jurisdiction=Court.FEDERAL_APPELLATE,
+ date_modified=datetime(2021, 1, 1, tzinfo=timezone.utc),
+ )
+
+ cls.child_court1 = CourtFactory(
+ id="child1",
+ parent_court=cls.parent_court,
+ full_name="Child Court 1",
+ short_name="CC1",
+ citation_string="CC1",
+ in_use=False,
+ has_opinion_scraper=False,
+ has_oral_argument_scraper=True,
+ position=2,
+ start_date=date(2010, 6, 15),
+ end_date=date(2020, 12, 31),
+ jurisdiction=Court.STATE_SUPREME,
+ date_modified=datetime(2022, 6, 15, tzinfo=timezone.utc),
+ )
+ cls.child_court2 = CourtFactory(
+ id="child2",
+ parent_court=cls.parent_court,
+ full_name="Child Court 2",
+ short_name="CC2",
+ citation_string="CC2",
+ in_use=True,
+ has_opinion_scraper=False,
+ has_oral_argument_scraper=False,
+ position=3,
+ start_date=date(2015, 5, 20),
+ end_date=None,
+ jurisdiction=Court.STATE_TRIAL,
+ date_modified=datetime(2023, 3, 10, tzinfo=timezone.utc),
+ )
+
+ cls.orphan_court = CourtFactory(
+ id="orphan",
+ full_name="Orphan Court",
+ short_name="OC",
+ citation_string="OC",
+ in_use=True,
+ has_opinion_scraper=False,
+ has_oral_argument_scraper=False,
+ position=4,
+ start_date=date(2012, 8, 25),
+ end_date=None,
+ jurisdiction=Court.FEDERAL_DISTRICT,
+ date_modified=datetime(2023, 5, 5, tzinfo=timezone.utc),
)
+ @async_to_sync
+ async def setUp(self):
+ self.path = reverse("court-list", kwargs={"version": "v4"})
+ self.q: Dict[str, Any] = {}
+
+ async def test_parent_court_filter(self):
+ """Can we filter courts by parent_court id?"""
+ self.q["parent_court"] = "parent1"
+ # Should return child1 and child2:
+ response = await self.assertCountInResults(2)
+
+ # Verify the returned court IDs
+ court_ids = [court["id"] for court in response.data["results"]]
+ self.assertEqual(set(court_ids), {"child1", "child2"})
+
+ # Filter for courts with parent_court id='orphan' (none should match)
+ self.q = {"parent_court": "orphan"}
+ await self.assertCountInResults(0)
+
+ async def test_no_parent_court_filter(self):
+ """Do we get all courts when using no filters?"""
+ self.q = {}
+ await self.assertCountInResults(4) # Should return all four courts
+
+ async def test_invalid_parent_court_filter(self):
+ """Do we handle invalid parent_court values correctly?"""
+ self.q["parent_court"] = "nonexistent"
+ await self.assertCountInResults(0)
+
+ async def test_id_filter(self):
+ """Can we filter courts by id?"""
+ self.q["id"] = "child1"
+ response = await self.assertCountInResults(1)
+ self.assertEqual(response.data["results"][0]["id"], "child1")
+
+ async def test_in_use_filter(self):
+ """Can we filter courts by in_use field?"""
+ self.q = {"in_use": "true"}
+ await self.assertCountInResults(3) # parent1, child2, orphan
+ self.q = {"in_use": "false"}
+ await self.assertCountInResults(1) # child1
+
+ async def test_has_opinion_scraper_filter(self):
+ """Can we filter courts by has_opinion_scraper field?"""
+ self.q = {"has_opinion_scraper": "true"}
+ await self.assertCountInResults(1) # parent1
+ self.q = {"has_opinion_scraper": "false"}
+ await self.assertCountInResults(3) # child1, child2, orphan
+
+ async def test_has_oral_argument_scraper_filter(self):
+ """Can we filter courts by has_oral_argument_scraper field?"""
+ self.q = {"has_oral_argument_scraper": "true"}
+ await self.assertCountInResults(1) # child1
+ self.q = {"has_oral_argument_scraper": "false"}
+ await self.assertCountInResults(3) # parent1, child2, orphan
+
+ async def test_position_filter(self):
+ """Can we filter courts by position with integer lookups?"""
+ self.q = {"position__gt": "2"}
+ await self.assertCountInResults(2) # child2 (3), orphan (4)
+ self.q = {"position__lte": "2"}
+ await self.assertCountInResults(2) # parent1 (1), child1 (2)
+
+ async def test_start_date_filter(self):
+ """Can we filter courts by start_date with date lookups?"""
+ self.q = {"start_date__year": "2015"}
+ await self.assertCountInResults(1) # child2 (2015-05-20)
+ self.q = {"start_date__gte": "2010-01-01"}
+ await self.assertCountInResults(3) # child1, child2, orphan
+
+ async def test_end_date_filter(self):
+ """Can we filter courts by end_date with date lookups?"""
+ self.q = {"end_date__day": "31"}
+ await self.assertCountInResults(1)  # child1 (end_date 2020-12-31)
+ self.q = {"end_date__year": "2024"}
+ await self.assertCountInResults(0)
+
+ async def test_short_name_filter(self):
+ """Can we filter courts by short_name with text lookups?"""
+ self.q = {"short_name__iexact": "Cc1"}
+ await self.assertCountInResults(1) # child1
+ self.q = {"short_name__icontains": "cc"}
+ await self.assertCountInResults(2) # child1, child2
+
+ async def test_full_name_filter(self):
+ """Can we filter courts by full_name with text lookups?"""
+ self.q = {"full_name__istartswith": "Child"}
+ await self.assertCountInResults(2) # child1, child2
+ self.q = {"full_name__iendswith": "Court"}
+ await self.assertCountInResults(2) # parent1, orphan
+
+ async def test_citation_string_filter(self):
+ """Can we filter courts by citation_string with text lookups?"""
+ self.q = {"citation_string": "OC"}
+ await self.assertCountInResults(1) # orphan
+ self.q = {"citation_string__icontains": "2"}
+ await self.assertCountInResults(1) # child2
+
+ async def test_jurisdiction_filter(self):
+ """Can we filter courts by jurisdiction?"""
+ self.q = {
+ "jurisdiction": [
+ Court.FEDERAL_APPELLATE,
+ Court.FEDERAL_DISTRICT,
+ ]
+ }
+ await self.assertCountInResults(2) # parent1 and orphan
+
+ async def test_combined_filters(self):
+ """Can we filter courts with multiple filters applied?"""
+ self.q = {
+ "in_use": "true",
+ "has_opinion_scraper": "false",
+ "position__gt": "2",
+ }
+ await self.assertCountInResults(2) # child2 and orphan
+
class DRFJudgeApiFilterTests(
SimpleUserDataMixin, TestCase, FilteringCountTestCase
@@ -2561,3 +2825,100 @@ async def test_avoid_logging_not_successful_webhook_events(
self.assertEqual(await webhook_events.acount(), 2)
# Confirm no milestone event should be created.
self.assertEqual(await milestone_events.acount(), 0)
+
+
+class CountParameterTests(TestCase):
+ @classmethod
+ def setUpTestData(cls) -> None:
+ cls.user_1 = UserProfileWithParentsFactory.create(
+ user__username="recap-user",
+ user__password=make_password("password"),
+ )
+ permissions = Permission.objects.filter(
+ codename__in=["has_recap_api_access", "has_recap_upload_access"]
+ )
+ cls.user_1.user.user_permissions.add(*permissions)
+
+ cls.court_canb = CourtFactory(id="canb")
+ cls.court_cand = CourtFactory(id="cand")
+
+ cls.url = reverse("docket-list", kwargs={"version": "v4"})
+
+ for i in range(7):
+ DocketFactory(
+ court=cls.court_canb,
+ source=Docket.RECAP,
+ pacer_case_id=str(100 + i),
+ )
+ for i in range(5):
+ DocketFactory(
+ court=cls.court_cand,
+ source=Docket.HARVARD,
+ pacer_case_id=str(200 + i),
+ )
+
+ def setUp(self):
+ self.client = make_client(self.user_1.user.pk)
+
+ async def test_count_on_returns_only_count(self):
+ """
+ Test that when 'count=on' is specified, the API returns only the count.
+ """
+ params = {"count": "on"}
+ response = await self.client.get(self.url, params)
+
+ self.assertEqual(response.status_code, 200)
+ # The response should only contain the 'count' key
+ self.assertEqual(list(response.data.keys()), ["count"])
+ self.assertIsInstance(response.data["count"], int)
+ # The count should match the total number of dockets
+ expected_count = await Docket.objects.acount()
+ self.assertEqual(response.data["count"], expected_count)
+
+ async def test_standard_response_includes_count_url(self):
+ """
+ Test that the standard response includes a 'count' key with the count URL.
+ """
+ response = await self.client.get(self.url)
+
+ self.assertEqual(response.status_code, 200)
+ self.assertIn("count", response.data)
+ count_url = response.data["count"]
+ self.assertIsInstance(count_url, str)
+ self.assertIn("count=on", count_url)
+
+ async def test_invalid_count_parameter(self):
+ """
+ Test that invalid 'count' parameter values are handled appropriately.
+ """
+ params = {"count": "invalid"}
+ response = await self.client.get(self.url, params)
+
+ self.assertEqual(response.status_code, 200)
+ # The response should be the standard paginated response
+ self.assertIn("results", response.data)
+ self.assertIsInstance(response.data["results"], list)
+
+ async def test_count_with_filters(self):
+ """
+ Test that the count returned matches the filters applied.
+ """
+ params = {"court": "canb", "source": Docket.RECAP, "count": "on"}
+ response = await self.client.get(self.url, params)
+
+ self.assertEqual(response.status_code, 200)
+ expected_count = await Docket.objects.filter(
+ court__id="canb",
+ source=Docket.RECAP,
+ ).acount()
+ self.assertEqual(response.data["count"], expected_count)
+
+ async def test_count_with_no_results(self):
+ """
+ Test that 'count=on' returns zero when no results match the filters.
+ """
+ params = {"court": "cand", "source": Docket.RECAP, "count": "on"}
+ response = await self.client.get(self.url, params)
+
+ self.assertEqual(response.status_code, 200)
+ self.assertEqual(response.data["count"], 0)
diff --git a/cl/api/utils.py b/cl/api/utils.py
index 3068cd4369..45188fbbfe 100644
--- a/cl/api/utils.py
+++ b/cl/api/utils.py
@@ -28,7 +28,12 @@
from rest_framework_filters import FilterSet, RelatedFilter
from rest_framework_filters.backends import RestFrameworkFilterBackend
-from cl.api.models import WEBHOOK_EVENT_STATUS, Webhook, WebhookEvent
+from cl.api.models import (
+ WEBHOOK_EVENT_STATUS,
+ Webhook,
+ WebhookEvent,
+ WebhookVersions,
+)
from cl.citations.utils import filter_out_non_case_law_and_non_valid_citations
from cl.lib.redis_utils import get_redis_interface
from cl.stats.models import Event
@@ -878,12 +883,47 @@ class WebhookKeyType(TypedDict):
deprecation_date: str | None
+def get_webhook_deprecation_date(webhook_deprecation_date: str) -> str:
+ """Convert a webhook deprecation date string to ISO-8601 format with
+ UTC timezone.
+
+ :param webhook_deprecation_date: The deprecation date as a string in
+ "YYYY-MM-DD" format.
+ :return: The ISO-8601 formatted date string with UTC timezone.
+ """
+
+ deprecation_date = (
+ datetime.strptime(webhook_deprecation_date, "%Y-%m-%d")
+ .replace(
+ hour=0, minute=0, second=0, microsecond=0, tzinfo=timezone.utc
+ )
+ .isoformat()
+ )
+ return deprecation_date
+
+
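+# A quick illustration of the helper above (hypothetical input, not from this
+# changeset): get_webhook_deprecation_date("2024-11-18") returns
+# "2024-11-18T00:00:00+00:00".
+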
def generate_webhook_key_content(webhook: Webhook) -> WebhookKeyType:
+ """Generate a dictionary representing the content for the webhook key.
+
+ :param webhook: The Webhook instance.
+ :return: A dictionary containing webhook details, event type, version,
+ creation date in ISO format, and deprecation date according to the webhook version.
+ """
+
+ deprecation_date: str | None = None
+ match webhook.version:
+ case WebhookVersions.v1:
+ deprecation_date = get_webhook_deprecation_date(
+ settings.WEBHOOK_V1_DEPRECATION_DATE # type: ignore
+ )
+ case WebhookVersions.v2:
+ deprecation_date = None
+
return {
"event_type": webhook.event_type,
"version": webhook.version,
"date_created": webhook.date_created.isoformat(),
- "deprecation_date": None,
+ "deprecation_date": deprecation_date,
}
diff --git a/cl/api/webhooks.py b/cl/api/webhooks.py
index f6ca97d9e3..15f1d3cabf 100644
--- a/cl/api/webhooks.py
+++ b/cl/api/webhooks.py
@@ -13,7 +13,12 @@
)
from cl.alerts.models import Alert
from cl.alerts.utils import OldAlertReport
-from cl.api.models import Webhook, WebhookEvent, WebhookEventType
+from cl.api.models import (
+ Webhook,
+ WebhookEvent,
+ WebhookEventType,
+ WebhookVersions,
+)
from cl.api.utils import (
generate_webhook_key_content,
update_webhook_event_after_request,
@@ -23,6 +28,7 @@
from cl.recap.api_serializers import PacerFetchQueueSerializer
from cl.recap.models import PROCESSING_STATUS, PacerFetchQueue
from cl.search.api_serializers import (
+ OpinionClusterWebhookResultSerializer,
SearchResultSerializer,
V3OpinionESResultSerializer,
)
@@ -192,10 +198,17 @@ def send_search_alert_webhook(
).data
else:
# ES results serialization
- serialized_results = V3OpinionESResultSerializer(
- results,
- many=True,
- ).data
+ match webhook.version:
+ case WebhookVersions.v1:
+ serialized_results = V3OpinionESResultSerializer(
+ results,
+ many=True,
+ ).data
+ case WebhookVersions.v2:
+ serialized_results = OpinionClusterWebhookResultSerializer(
+ results,
+ many=True,
+ ).data
post_content = {
"webhook": generate_webhook_key_content(webhook),
diff --git a/cl/assets/static-global/css/opinions.css b/cl/assets/static-global/css/opinions.css
new file mode 100644
index 0000000000..ff1b0200d3
--- /dev/null
+++ b/cl/assets/static-global/css/opinions.css
@@ -0,0 +1,721 @@
+
+
+/*Wrap all our changes around an opinion-body class we load up
+ in the opinion template*/
+
+.opinion-body {
+
+ .harvard > * {
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ font-size: 15px;
+ letter-spacing: 0.2px;
+ text-align: justify;
+ padding:0px;
+ margin: 0px;
+ background-color: white;
+ border: none;
+ line-height: 2.3em;
+ }
+
+ #headmatter > parties {
+ text-align: center;
+ font-style: initial;
+ font-size: 2em;
+ display: block;
+ }
+ #headmatter > div.footnotes > .footnote > p {
+ line-height: 1em;
+ }
+
+ #headmatter > * {
+ text-indent: 2em;
+ }
+
+ #headmatter docketnumber,
+ #headmatter court,
+ #headmatter parties,
+ #headmatter attorneys,
+ #headmatter syllabus,
+ #headmatter decisiondate {
+ display: block;
+ }
+
+ #headmatter > div.footnotes {
+ border-top: None;
+ padding-top: 1em;
+ }
+
+ .jump-links > a{
+ position: relative;
+ margin: -8px 20px 0 0;
+ width: 140px;
+ line-height: 18px;
+ font-size: 14px;
+ cursor: pointer;
+ white-space: nowrap;
+ text-overflow: ellipsis;
+ opacity: 1;
+ }
+
+ .hr-opinion {
+ border-top: 2px solid black;
+ }
+
+ /*Clean up the Case Caption section to look large and clean*/
+ .case-caption {
+ font-size: 3em;
+ font-weight: 500;
+ text-align: left;
+ line-height: 1.1em;
+ margin-top: 50px;
+ }
+
+
+ .case-court {
+ font-size: 25px;
+ text-align: left;
+ }
+
+/*Update sidebar jump links to look nice*/
+.jump-links {
+ font-size: 12px;
+ padding-top: 5px;
+}
+
+ li.jump-links.active {
+ color: #B53C2C;
+ font-weight: bold;
+ }
+
+ li.jump-links {
+ list-style-type: none;
+ padding-left: 0;
+ }
+
+ li.jump-links::before {
+ content: "";
+ border-left: 3px solid lightgrey;
+ height: 1em;
+ padding-right: 8px;
+ display: inline-block;
+ margin-right: 5px;
+ }
+
+ li.jump-links.active::before {
+ content: "";
+ border-left: 2px solid #B53C2C;
+ padding-right: 8px;
+ display: inline-block;
+ margin-right: 5px;
+ }
+
+
+ .jump-links {
+ font-size: 12px;
+ padding-top: 5px;
+}
+
+li.jump-links {
+ height:2.5em;
+ list-style-type: none;
+ padding-left: 0;
+ position: relative;
+}
+
+li.jump-links::before {
+ content: "";
+ border-left: 2px solid lightgrey;
+ height: 100%;
+ position: absolute;
+ left: 0;
+ top: 0;
+ padding-right: 8px;
+ display: inline-block;
+}
+
+/* Active link styles */
+li.jump-links > a.active {
+ font-weight: 500;
+ color: black;
+}
+
+li.jump-links > a {
+ padding-left:10px;
+ color: black;
+}
+
+
+div.footnote:first-of-type {
+ border-top: 1px solid black;
+ width: 100%;
+ display: block;
+ }
+
+ /*Columbia specific Fix*/
+ /*Columbia/HTML Law box special footnotes data almost always starts with fn1*/
+ footnote_body sup#fn1 {
+ padding-top: 10px;
+ border-top: 1px solid black;
+ width: 100%;
+ display: block;
+ }
+
+ /*HTML law box page numbers*/
+ strong[data-ref] {
+ font-size: 0.8em;
+ font-style: italic;
+ }
+
+ strong[data-ref]::before {
+ content: attr(data-ref);
+ display: inline;
+ position: relative;
+ float: right;
+ left: -.5em;
+ font-size: 0.8em;
+ color: dimgray;
+ width: 0;
+ }
+
+
+ div.footnote {
+ padding-top: 10px;
+ display: block;
+ line-height: 1em;
+ }
+
+ div.footnote > p {
+ display: inline;
+ }
+
+ div.footnote::before {
+ content: attr(label) " ";
+ font-weight: bold;
+ color: #000;
+ margin-right: 5px;
+ padding-top: 2em;
+ }
+
+ div.footnote {
+ padding-top: 10px;
+ font-size: 12px;
+ }
+
+ div.footnote > * {
+ padding-top: 10px;
+ font-size: 12px;
+ }
+
+
+ /*To help separate footnotes from opinion document*/
+ footnote:first-of-type {
+ border-top: 1px solid black;
+ width: 100%;
+ display: block;
+ }
+
+ footnote {
+ padding-top: 10px;
+ display: block;
+ line-height: 1.5em;
+ /*margin-left: 1em;*/
+ padding-left: 40px;
+ }
+
+ footnote > p {
+ display: inline;
+ }
+
+ footnote::before {
+ content: attr(label);
+ font-weight: bold;
+ color: #000;
+ margin-right: 26px;
+ padding-top: 2em;
+ margin-left: -35px;
+ }
+
+ /*Handle CSS in Columbia opinions*/
+ footnotemark {
+ font-weight: bold;
+ font-size: 0.8em;
+ vertical-align: super;
+ line-height: 0;
+ }
+
+
+ #cited-by {
+ z-index: 1;
+ }
+
+ footnotemark {
+ cursor: pointer;
+ color: blue;
+ text-decoration: underline;
+ }
+
+ footnote {
+ padding-top: 10px;
+ font-size: 12px;
+ }
+
+
+ .jumpback {
+ color: blue;
+ cursor: pointer;
+ font-weight: bold;
+ margin-left: 5px;
+ }
+
+ /*Jump backs are empty in resource.org documents for now*/
+ #resource-org-text .jumpback {
+ display: none;
+ }
+
+
+ footnote > * {
+ font-size: 12px;
+ }
+
+ author > page-number {
+ display: block;
+ font-size: 15px;
+ }
+
+ author {
+ display: inline;
+ margin: 0; /* Remove any default margin */
+ text-indent: 2em; /* Indents the first line by 2em */
+ }
+
+ /*Important for indenting harvard opinions correctly*/
+ opinion > p[id^="b"] {
+ text-indent: 2em;
+ }
+
+
+ opinion > [id^="p-"] {
+ padding-left: 2em;
+ text-indent: 2em;
+ }
+}
+
+[id^="A"] {
+ text-indent: 2em;
+ display: inline;
+
+}
+
+.opinion-body {
+ /*I think I did this but I don't know why, so I'm leaving it for now*/
+ /*.tab-pane {*/
+ /* display: none; */
+ /*}*/
+
+ .tab-pane.active {
+ display: block;
+ }
+
+ @media (min-width: 767px) {
+
+ #sidebar {
+ display: flex;
+ flex-direction: column;
+ height: 100vh;
+ justify-content: space-between; /* Push content apart */
+ padding: 20px;
+ padding-top: 3px;
+ overflow-y: auto;
+ position: -webkit-sticky; /* For Safari */
+ position: sticky;
+ top: 0; /* Stick to the top of the viewport */
+
+ }
+ }
+
+ @media (min-width: 100px) {
+ #sidebar {
+ height: auto;
+ }
+ }
+
+ .sidebar-bottom {
+ margin-top: auto;
+ }
+
+ .support-flp, .sponsored-by {
+ margin-bottom: 20px;
+ text-align: center;
+ }
+
+ #opinion > article > * > p {
+ text-indent: 2em;
+ }
+
+ .active > a {
+ border-bottom-color: #B53C2C;
+ }
+
+ #opinion p {
+ text-indent: 2em;
+ }
+
+
+ .nav-pills > li > a {
+ padding: 1px 15px;
+ }
+
+ blockquote > * {
+ text-indent: 0em;
+ }
+
+ sup {
+ font-size: .9em;
+ }
+
+ .main-document {
+ padding-bottom: 5em;
+ }
+
+ /*Case Caption CSS*/
+ #caption-square {
+ background-color: #F6F2EE;
+ margin-left: -15px;
+ margin-right: -15px;
+ margin-top: -20px;
+ }
+
+ #caption-square > ul > li {
+ background-color: #fcfaf9;
+ border-top-right-radius: 5px 5px; /* Rounds the corners */
+ border-top-left-radius: 5px 5px; /* Rounds the corners */
+ margin-left: 4px;
+ }
+
+ #caption-square > ul > li.active {
+ background-color: #ffffff;
+ border-bottom: 1px solid lightgrey;
+ }
+
+ #caption-square > ul > li.active {
+ background-color: #ffffff;
+ border-bottom: 1px solid white;
+ }
+
+ #caption-square > ul > li.active > a {
+ border: 1px solid white;
+ }
+
+ /*Opinion Date File*/
+
+ .case-date-new {
+ border: 1px solid #B53C2C;
+ padding: 0px 10px;
+ border-radius: 20px; /* Rounds the corners */
+ color: #B53C2C;
+ }
+
+
+
+ /*Buttons on Top of Page*/
+ .add-a-note {
+ margin-left: 5px;
+ border: 1px solid black;
+ border-radius: 10px;
+ padding-left: 8px;
+ padding-right: 8px;
+ }
+
+ .add-citation-alert {
+ border: 1px solid black;
+ border-radius: 10px;
+ padding-left: 8px;
+ padding-right: 8px;
+ }
+
+ cross_reference {
+ font-style: italic;
+ }
+
+ #opinion-caption {
+ margin-top: 20px;
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ font-size: 15px;
+ letter-spacing: 0.2px;
+ line-height: 2.3em;
+ margin-bottom: 20px;
+ padding-left: 20px;
+ padding-top: 10px;
+ padding-right: 10px;
+ }
+
+ .case-details {
+ font-size: 16px;
+ }
+
+ .case-details li {
+ line-height: 1.5em;
+ }
+
+ span.citation.no-link {
+ font-style: italic;
+ }
+
+ .opinion-button-row {
+ padding-top: 40px;
+ }
+
+ #download-original {
+ color: black;
+ border-color: black;
+ background-color: white;
+ }
+
+
+ #btn-group-download-original {
+ float:right;
+ margin-top: 0px;
+ margin-left:10px;
+ padding-right: 10px;
+ }
+ #add-note-button {
+ color: black;
+ border-color: black;
+ background-color: white;
+ }
+
+ .top-row {
+ height: 32px;
+ line-height:28px
+ }
+
+ .action-buttons{
+ display: flex;
+ column-gap: 5px;
+ }
+
+ #get-citation-btn-group {
+ float:right;
+ }
+
+ #get-citation-btn-group > a {
+
+ color: black;
+ border-color: black;
+ background-color: white;
+ vertical-align: top;
+ }
+
+
+ p > span.star-pagination::after {
+ display: inline;
+ position: relative;
+ content: attr(label);
+ float: left;
+ left: -4.5em;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+ div > span.star-pagination::after {
+ display: inline;
+ position: relative;
+ content: attr(label);
+ float: left;
+ left: -2.5em;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+ div.subopinion-content > .harvard {
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ font-size: 15px;
+ letter-spacing: 0.2px;
+ line-height: 2.3em;
+ text-align: justify;
+ }
+
+ #columbia-text {
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ font-size: 15px;
+ letter-spacing: 0.2px;
+ line-height: 2.3em;
+ text-align: justify;
+ }
+
+ #columbia-text > div.subopinion-content > div > p > span.star-pagination {
+ color: #555555;
+ }
+
+ #columbia-text > div.subopinion-content > div > p > span.star-pagination::after {
+ display: inline;
+ position: relative;
+ content: attr(label);
+ float: left;
+ left: -4.5em;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+
+ page-number::after {
+ display: inline;
+ position: relative;
+ content: attr(label);
+ float: right;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+ page-number {
+ font-style: italic;
+ font-size: 0.8em;
+ margin-right: 4px;
+ margin-left: 2px;
+ }
+
+ page-label {
+ font-style: italic;
+ font-size: 0.8em;
+ margin-right: 4px;
+ margin-left: 2px;
+ }
+
+ page-label {
+ cursor: pointer;
+ }
+
+ page-label:hover {
+ color: darkblue;
+ text-decoration: underline; /* Example hover styling */
+ }
+
+ page-label::after {
+ display: inline;
+ position: relative;
+ content: attr(data-label);
+ float: right;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+ a.page-label {
+ font-style: italic;
+ font-size: 0.8em;
+ margin-right: 4px;
+ margin-left: 2px;
+ color: #555555;
+ }
+
+
+ a.page-label::after {
+ display: inline;
+ position: relative;
+ content: attr(data-label);
+ float: right;
+ font-size: 1em;
+ color: dimgray;
+ width: 0;
+ }
+
+ footnote > blockquote > a.page-label::after {
+ right: -1.0em;
+ }
+
+ blockquote[id^="A"] > a.page-label::after {
+ right: -1.0em;
+ }
+
+ blockquote[id^="b"] > a.page-label::after {
+ right: -1.0em;
+ }
+
+ opinion > a.page-label::after {
+ right: -1.0em;
+ text-indent: 0;
+ }
+
+ .harvard a.page-label::after {
+ right: -1.0em;
+ text-indent: 0;
+ position: absolute;
+ }
+
+ /* Adjust to move the entire blockquote to the right */
+ blockquote {
+ margin-left: 3em;
+ display: block;
+ }
+
+ div.counsel > a.page-label::after {
+ right: -1.0em;
+ }
+
+ footnote > p > a.page-label::after {
+ display: none;
+ }
+
+ footnote > blockquote > a.page-label::after {
+ display: none;
+ }
+
+ /*Remove the header on the opinion page so its flush*/
+ header {
+ margin-bottom: 0px;
+ }
+
+ .harvard > opinion > author {
+ line-height: inherit;
+ font-size: inherit;
+ display: inline-block;
+ }
+
+ .container > .content {
+ margin-bottom: 0em;
+ }
+
+ .meta-data-header {
+ font-size:15px;
+ }
+
+ .case-details {
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ letter-spacing: 0.2px;
+ line-height:2.3em;
+ }
+
+ .opinion-section-title {
+ margin-top: 50px;
+ font-family: Merriweather, "Times New Roman", Times, serif;
+ }
+
+ /*Add style to align roman numerals */
+ .center-header {
+ text-align: center;
+ font-size: 2em;
+ }
+
+ /*If XS screen - remove the side page labels*/
+ @media (max-width: 768px) {
+ a.page-label::after {
+ display: none;
+ }
+ a.page-number::after {
+ display: none;
+ }
+ }
+
+ .scraped-html p {
+ display: block;
+ text-indent: 1em;
+ }
+}
+
+html {
+ scroll-behavior: smooth;
+}
\ No newline at end of file
diff --git a/cl/assets/static-global/css/override.css b/cl/assets/static-global/css/override.css
index 02f4be2062..ce071745d2 100644
--- a/cl/assets/static-global/css/override.css
+++ b/cl/assets/static-global/css/override.css
@@ -155,7 +155,30 @@ header {
/* Standard target color. */
*:target {
- background-color: lightyellow;
+ -webkit-animation: target-fade 3s;
+ -moz-animation: target-fade 3s;
+ -o-animation: target-fade 3s;
+ animation: target-fade 3s;
+}
+
+@-webkit-keyframes target-fade {
+ from { background-color: lightyellow; }
+ to { background-color: transparent; }
+}
+
+@-moz-keyframes target-fade {
+ from { background-color: lightyellow; }
+ to { background-color: transparent; }
+}
+
+@-o-keyframes target-fade {
+ from { background-color: lightyellow; }
+ to { background-color: transparent; }
+}
+
+@keyframes target-fade {
+ from { background-color: lightyellow; }
+ to { background-color: transparent; }
}
.alt {
@@ -1008,17 +1031,9 @@ closely the content in the book*/
#headmatter > .footnotes > .footnote > a {
color: #000099;
- position: absolute;
font-size: 1em;
}
-#headmatter {
- border: 1px rgba(210, 210, 210, 0.55) solid;
- padding: 10px;
- background: rgba(232, 232, 232, 0.37);
- margin: 10px;
-}
-
#headmatter > attorneys, docketnumbers, judges, footnotes, court, decisiondate {
line-height: 2em;
font-size: 14px;
@@ -1607,7 +1622,7 @@ textarea {
/* Prevent images inside opinion from overflowing */
-#opinion-content img {
+div.subopinion-content img {
max-width: 100%;
height: auto;
}
diff --git a/cl/assets/static-global/js/base.js b/cl/assets/static-global/js/base.js
index 99355aa207..9d69f158bb 100644
--- a/cl/assets/static-global/js/base.js
+++ b/cl/assets/static-global/js/base.js
@@ -307,11 +307,8 @@ $(document).ready(function () {
if (modal_exist) {
$('#open-modal-on-load').modal();
}
-
});
-
-
// Debounce - rate limit a function
// https://davidwalsh.name/javascript-debounce-function
function debounce(func, wait, immediate) {
@@ -369,3 +366,4 @@ if (form && button) {
button.disabled = true;
});
}
+
diff --git a/cl/assets/static-global/js/opinions.js b/cl/assets/static-global/js/opinions.js
new file mode 100644
index 0000000000..984999c239
--- /dev/null
+++ b/cl/assets/static-global/js/opinions.js
@@ -0,0 +1,291 @@
+
+////////////////
+// Pagination //
+////////////////
+
+// Star pagination weirdness for ANON 2020 dataset -
+
+$('.star-pagination').each(function (index, element) {
+ if ($(this).attr('pagescheme')) {
+ // For ANON 2020 this has two sets of numbers but only one can be
+ // verified with other databases so only showing one
+ var number = $(this).attr('number');
+ if (number.indexOf('P') > -1) {
+ $(this).attr('label', '');
+ } else {
+ $(this).attr('label', number);
+ }
+ } else {
+ $(this).attr('label', this.textContent.trim().replace('*Page ', ''));
+ }
+});
+
+// Systematize page numbers
+$('page-number').each(function (index, element) {
+ // Get the label and citation index from the current element
+ const label = $(this).attr('label');
+ const citationIndex = $(this).attr('citation-index');
+
+ // Clean up the label (remove '*') and use it for the new href and id
+ const cleanLabel = label.replace('*', '').trim();
+
+ // Create the new element
+ const $newAnchor = $('<a></a>')
+ .addClass('page-label')
+ .attr('data-citation-index', citationIndex)
+ .attr('data-label', cleanLabel)
+ .attr('href', '#' + cleanLabel)
+ .attr('id', cleanLabel)
+ .text('*' + cleanLabel);
+
+ // Replace the element with the new element
+ $(this).replaceWith($newAnchor);
+});
+
+// Systematize page numbers
+$('span.star-pagination').each(function (index, element) {
+ // Get the label and citation index from the current element
+ const label = $(this).attr('label');
+ const citationIndex = $(this).attr('citation-index');
+
+ // Clean up the label (remove '*') and use it for the new href and id
+ const cleanLabel = label.replace('*', '').trim();
+
+ // Create the new element
+ const $newAnchor = $('<a></a>')
+ .addClass('page-label')
+ .attr('data-citation-index', citationIndex)
+ .attr('data-label', cleanLabel)
+ .attr('href', '#' + cleanLabel)
+ .attr('id', cleanLabel)
+ .text('*' + cleanLabel);
+
+ // Replace the element with the new element
+ $(this).replaceWith($newAnchor);
+});
+// Fix weird data-ref bug
+document.querySelectorAll('strong').forEach((el) => {
+ if (/\[\d+\]/.test(el.textContent)) {
+ // Check if the text matches the pattern [XXX]
+ const match = el.textContent.match(/\[\d+\]/)[0]; // Get the matched pattern
+ el.setAttribute('data-ref', match); // Set a data-ref attribute
+ }
+});
+
+///////////////
+// Footnotes //
+///////////////
+
+
+
+
+// We formatted the harvard footnotes oddly when they appeared inside the pre-opinion content.
+// This removes the excess <a> tags and allows us to standardize footnotes across our content.
+// footnote cleanup in harvard
+// Update and modify footnotes to enable linking
+
+// This is needed for variations in resource.org footnotes
+$('.footnotes > .footnote').each(function () {
+ var $this = $(this);
+ var newElement = $('<footnote></footnote>'); // Create a new <footnote> element
+
+ // Copy attributes and content from the original element
+ $.each(this.attributes, function (_, attr) {
+ newElement.attr(attr.name, attr.value);
+ });
+ newElement.html($this.html()); // Copy the inner content
+ $this.replaceWith(newElement); // Replace the original element with the new one
+});
+
+
+$('div.footnote > a').remove();
+const headfootnotemarks = $('a.footnote');
+const divfootnotes = $('div.footnote');
+
+if (headfootnotemarks.length === divfootnotes.length) {
+ headfootnotemarks.each(function (index) {
+ const footnoteMark = $(this);
+ const footnote = divfootnotes.eq(index);
+
+ const $newElement = $('<footnotemark></footnotemark>');
+ $.each(footnoteMark[0].attributes, function (_, attr) {
+ if (attr.specified) {
+ $newElement.attr(attr.name, attr.value);
+ }
+ });
+ $newElement.html(footnoteMark.html());
+ footnoteMark.replaceWith($newElement);
+
+ const $newFootnote = $('<footnote></footnote>');
+ $.each(footnote[0].attributes, function (_, attr) {
+ if (attr.specified) {
+ $newFootnote.attr(attr.name, attr.value);
+ }
+ });
+ $newFootnote.attr('label', footnote.attr('label'));
+ $newFootnote.html(footnote.html());
+ footnote.replaceWith($newFootnote);
+ });
+}
+
+// This fixes many of the harvard footnotes so that they can
+// easily link back and forth - we have a second set
+// of harvard footnotes inside headnotes that still need to be parsed out.
+
+const footnoteMarks = $('footnotemark');
+const footnotes = $('footnote').not('[orphan="true"]');
+
+if (footnoteMarks.length === footnotes.length) {
+ // we can make this work
+ footnoteMarks.each(function (index) {
+ const footnoteMark = $(this);
+ const $newElement = $('<a></a>');
+ // Copy attributes from the old element
+ $.each(footnoteMark[0].attributes, function (_, attr) {
+ if (attr.specified) {
+ $newElement.attr(attr.name, attr.value);
+ }
+ });
+ $newElement.html(footnoteMark.html());
+ const $supElement = $('<sup></sup>').append($newElement);
+ footnoteMark.replaceWith($supElement);
+ const footnote = footnotes.eq(index);
+ $newElement.attr('href', `#fn${index}`);
+ $newElement.attr('id', `fnref${index}`);
+ footnote.attr('id', `fn${index}`);
+
+ const $jumpback = $('<a class="jumpback">↵</a>');
+ $jumpback.attr('href', `#fnref${index}`);
+
+ footnote.append($jumpback);
+ });
+} else {
+ // If the number of footnotes and footnotemarks are inconsistent use the method to scroll to the nearest one
+ // We don't use this by default because many older opinions reuse *, ^, and other icons repeatedly on every page,
+ // so the label is not usable to identify the correct footnote.
+
+ footnotes.each(function (index) {
+ const $jumpback = $('<a class="jumpback">↵</a>');
+ $jumpback.attr('label', $(this).attr('label'));
+ $(this).append($jumpback);
+ });
+
+ // There is no silver bullet for footnotes
+ $('footnotemark').on('click', function () {
+ const markText = $(this).text().trim(); // Get the text of the clicked footnotemark
+ const currentScrollPosition = $(window).scrollTop(); // Get the current scroll position
+
+ // Find the first matching footnote below the current scroll position
+ const targetFootnote = $('footnote')
+ .filter(function () {
+ return $(this).attr('label') === markText && $(this).offset().top > currentScrollPosition;
+ })
+ .first();
+
+ // If a matching footnote is found, scroll to it
+ if (targetFootnote.length > 0) {
+ $('html, body').animate(
+ {
+ scrollTop: targetFootnote.offset().top,
+ },
+ 500
+ ); // Adjust the animation duration as needed
+ } else {
+ // console.warn('No matching footnote found below the current position for:', markText);
+ }
+ });
+
+
+ //////////////
+ // Sidebar //
+ /////////////
+
+ $('.jumpback').on('click', function () {
+ const footnoteLabel = $(this).attr('label').trim(); // Get the label attribute of the clicked footnote
+ const currentScrollPosition = $(window).scrollTop(); // Get the current scroll position
+
+ // Find the first matching footnotemark above the current scroll position
+ const targetFootnotemark = $('footnotemark')
+ .filter(function () {
+ return $(this).text().trim() === footnoteLabel && $(this).offset().top < currentScrollPosition;
+ })
+ .last();
+
+ // If a matching footnotemark is found, scroll to it
+ if (targetFootnotemark.length > 0) {
+ $('html, body').animate(
+ {
+ scrollTop: targetFootnotemark.offset().top,
+ },
+ 500
+ ); // Adjust the animation duration as needed
+ } else {
+ // console.warn('No matching footnotemark found above the current position for label:', footnoteLabel);
+ }
+ });
+}
+
+$(document).ready(function () {
+ function adjustSidebarHeight() {
+ if ($(window).width() > 767) {
+ // Only apply the height adjustment for screens wider than 767px
+ var scrollTop = $(window).scrollTop();
+ if (scrollTop <= 175) {
+ $('.opinion-sidebar').css('height', 'calc(100vh - ' + (175 - scrollTop) + 'px)');
+ // $('.main-document').css('height', 'calc(100vh + ' + (scrollTop) + 'px)');
+ } else {
+ $('.opinion-sidebar').css('height', '100vh');
+ }
+ } else {
+ $('.opinion-sidebar').css('height', 'auto'); // Reset height for mobile view
+ }
+ }
+
+ // Adjust height on document ready and when window is scrolled or resized
+ adjustSidebarHeight();
+ $(window).on('scroll resize', adjustSidebarHeight);
+});
+
+// Update sidebar to show where we are on the page
+document.addEventListener('scroll', function () {
+ let sections = document.querySelectorAll('.jump-link');
+ let currentSection = '';
+
+ // Determine which section is currently in view
+ sections.forEach((section) => {
+ let sectionTop = section.offsetTop;
+ let sectionHeight = section.offsetHeight;
+ if (window.scrollY >= sectionTop - sectionHeight / 3) {
+ currentSection = section.getAttribute('id');
+ }
+ });
+ if (!currentSection) currentSection = 'top';
+ // Remove the active class from links and their parent elements
+ let links = document.querySelectorAll('.jump-links > a.active');
+ links.forEach((link) => {
+ link.classList.remove('active');
+ if (link.parentElement) {
+ link.parentElement.classList.remove('active');
+ }
+ });
+
+ // Add the active class to the link and its parent that corresponds to the current section
+ let activeLink = document.getElementById(`nav_${currentSection}`);
+ if (!activeLink) return;
+
+ activeLink.classList.add('active');
+ if (activeLink.parentElement) {
+ activeLink.parentElement.classList.add('active');
+ }
+});
+
+document.querySelectorAll("page-label").forEach(label => {
+ label.addEventListener("click", function() {
+ const href = this.getAttribute("href");
+ if (href) {
+ window.location.href = href;
+ }
+ });
+});
diff --git a/cl/assets/static-global/js/webhooks-page.js b/cl/assets/static-global/js/webhooks-page.js
index bad5cfdd82..22fe9184e4 100644
--- a/cl/assets/static-global/js/webhooks-page.js
+++ b/cl/assets/static-global/js/webhooks-page.js
@@ -20,10 +20,12 @@ htmx.on('htmx:afterSwap', (e) => {
let webhook_form = document.getElementById('webhooks-body');
if (e.detail.target.id === 'webhooks-body') {
// If the user already have a webhook configured for each type of event, show a message.
- let event_type_options = document.getElementById('id_event_type').options.length;
+ let event_type_options = Array.from(document.getElementById('id_event_type').options)
+ .filter(option => option.value !== "") //Filter out default option
+ .length;
if (event_type_options === 0) {
webhook_form.innerHTML =
- "You already have a webhook configured for each type of event. Please delete one before making another.";
+ "You already have a webhook configured for each type of event and version available. Please delete one before making another.";
}
//Toggle form modal
$('#webhook-modal').modal('toggle');
diff --git a/cl/assets/templates/base.html b/cl/assets/templates/base.html
index be91672e34..0cc58c47f3 100644
--- a/cl/assets/templates/base.html
+++ b/cl/assets/templates/base.html
@@ -84,7 +84,7 @@
You did not supply the "private" variable to your template.
{% if FUNDRAISING_MODE %}
- {% include 'includes/dismissible_nav_banner.html' with link="https://free.law/2024/01/18/new-recap-archive-search-is-live" text="A year in the making, today we are launching a huge new search engine for the RECAP Archive" emoji="🎁" cookie_name="no_banner"%}
+ {% include 'includes/dismissible_nav_banner.html' with link="https://donate.free.law/forms/givingtuesday" text="Today is GivingTuesday. Your support of Free Law Project helps make the justice system more transparent and accessible to all." cookie_name="giving_tuesday" button_text="Donate Today!"%}
{% endif %}
diff --git a/cl/assets/templates/includes/dismissible_nav_banner.html b/cl/assets/templates/includes/dismissible_nav_banner.html
index 501e33c4a5..c1f3480830 100644
--- a/cl/assets/templates/includes/dismissible_nav_banner.html
+++ b/cl/assets/templates/includes/dismissible_nav_banner.html
@@ -3,12 +3,16 @@
available and takes up to four keyword arguments described below:
Parameters:
- link: The URL for the "Learn More" button.
- text: Text of the banner.
- cookie_name: Name of the cookie used to remember if the user has already dismissed
- the banner. This prevents them from seeing the same message repeatedly.
- emoji: Insert an emoji next to your banner message using its decimal HTML entity
- code (like 👍).
+ - text: Text of the banner.
+ - link: The URL for the button.
+ - cookie_name: Name of the cookie used to remember if the user has already
+ dismissed the banner. This prevents them from seeing the same message
+ repeatedly.
+ - button_text (optional): Text for the button. Defaults to "Learn More".
+ - button_emoji (optional): An Idiomatic Text element () to display
+ inside the button.
+ - emoji (optional): An HTML entity code (e.g., 👍) to insert an
+ emoji next to the banner message.
It's advisable to wrap this template within an if tag and use the parent element to add
extra conditions to handle the visibility of the banner. The current template only checks
@@ -36,14 +40,14 @@
{% if emoji %}{{emoji}}{% endif %} {{text}}
- class="btn btn-primary btn-lg hidden-xs">Learn More
+ class="btn btn-primary btn-lg hidden-xs">{% if button_emoji %}{{button_emoji}}{% endif %} {% if button_text %}{{button_text}}{% else %}Learn More{% endif %}
- class="btn btn-primary btn-sm hidden-sm hidden-md hidden-lg">Learn More
+ class="btn btn-primary btn-sm hidden-sm hidden-md hidden-lg">{% if button_emoji %}{{button_emoji}}{% endif %} {% if button_text %}{{button_text}}{% else %}Learn More{% endif %}
diff --git a/cl/audio/api_views.py b/cl/audio/api_views.py
index a444db4a98..fa6d518ec9 100644
--- a/cl/audio/api_views.py
+++ b/cl/audio/api_views.py
@@ -1,4 +1,5 @@
from rest_framework import viewsets
+from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
from cl.api.api_permissions import V3APIPermission
from cl.api.utils import LoggingMixin
@@ -10,7 +11,10 @@
class AudioViewSet(LoggingMixin, viewsets.ModelViewSet):
serializer_class = AudioSerializer
filterset_class = AudioFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = (
"id",
"date_created",
diff --git a/cl/corpus_importer/management/commands/probe_iquery_pages_daemon.py b/cl/corpus_importer/management/commands/probe_iquery_pages_daemon.py
index 759700673e..8a99322eb2 100644
--- a/cl/corpus_importer/management/commands/probe_iquery_pages_daemon.py
+++ b/cl/corpus_importer/management/commands/probe_iquery_pages_daemon.py
@@ -81,7 +81,7 @@ def handle(self, *args, **options):
iterations_completed = 0
r = get_redis_interface("CACHE")
testing = True if testing_iterations else False
- while True and settings.IQUERY_PROBE_DAEMON_ENABLED:
+ while True and settings.IQUERY_CASE_PROBE_DAEMON_ENABLED:
for court_id in court_ids:
if r.exists(f"iquery:court_wait:{court_id}"):
continue
diff --git a/cl/corpus_importer/management/commands/scrape_pacer_free_opinions.py b/cl/corpus_importer/management/commands/scrape_pacer_free_opinions.py
index 43611d240f..08b2de837d 100644
--- a/cl/corpus_importer/management/commands/scrape_pacer_free_opinions.py
+++ b/cl/corpus_importer/management/commands/scrape_pacer_free_opinions.py
@@ -331,10 +331,10 @@ def get_pdfs(
throttle.update_min_items(min_items)
logger.info(
- f"Court cycle completed for: {row.court_id}. Current iteration: {cycle_checker.current_iteration}. Sleep 2 seconds "
+ f"Court cycle completed for: {row.court_id}. Current iteration: {cycle_checker.current_iteration}. Sleep 1 second "
f"before starting the next cycle."
)
- time.sleep(2)
+ time.sleep(1)
logger.info(f"Processing row id: {row.id} from {row.court_id}")
c = chain(
process_free_opinion_result.si(
diff --git a/cl/corpus_importer/management/commands/update_casenames_wl_dataset.py b/cl/corpus_importer/management/commands/update_casenames_wl_dataset.py
new file mode 100644
index 0000000000..c98d619b93
--- /dev/null
+++ b/cl/corpus_importer/management/commands/update_casenames_wl_dataset.py
@@ -0,0 +1,440 @@
+import logging
+import re
+import time
+from datetime import date, datetime
+
+import pandas as pd
+from django.core.management.base import BaseCommand, CommandError
+from django.db import transaction
+from django.db.models import Q, QuerySet
+from eyecite import get_citations
+from eyecite.models import FullCaseCitation
+from eyecite.tokenizers import HyperscanTokenizer
+from juriscraper.lib.string_utils import harmonize
+
+from cl.citations.utils import map_reporter_db_cite_type
+from cl.search.models import Citation, OpinionCluster
+
+logger = logging.getLogger(__name__)
+HYPERSCAN_TOKENIZER = HyperscanTokenizer(cache_dir=".hyperscan")
+
+# Compile regex pattern once for efficiency
+WORD_PATTERN = re.compile(r"\b\w+\b|\b\w+\.\b")
+
+FALSE_POSITIVES = {
+ "and",
+ "personal",
+ "restraint",
+ "matter",
+ "county",
+ "city",
+ "of",
+ "the",
+ "estate",
+ "in",
+ "inc",
+ "re",
+ "st",
+ "ex",
+ "rel",
+ "vs",
+ "for",
+}
+
+DATE_FORMATS = (
+ "%B %d, %Y",
+ "%d-%b-%y",
+ "%m/%d/%Y",
+ "%m/%d/%y",
+ "%b. %d, %Y",
+ "%Y-%m-%d",
+)
+
+
+def tokenize_case_name(case_name: str) -> set[str]:
+ """Tokenizes case name and removes single-character words except for letters with periods.
+
+ :param case_name: case name to tokenize
+ :return: set of words
+ """
+ words = []
+ for word in WORD_PATTERN.findall(case_name):
+ if len(word) > 1:
+ # Only keep words with more than one character
+ words.append(word.lower())
+
+ # Return only valid words
+ return set(words) - FALSE_POSITIVES
+
+
+def check_case_names_match(west_case_name: str, cl_case_name: str) -> bool:
+ """Compare two case name and decide whether they are the same or not
+
+ Tokenize each string, capturing both words and abbreviations with periods and
+ convert all words to lowercase for case-insensitive matching and check if there is
+ an overlap between case names
+
+ :param west_case_name: case name from csv
+ :param cl_case_name: case name from cluster
+ :return: True if they match else False
+ """
+
+ west_set = tokenize_case_name(west_case_name.lower())
+ cl_set = tokenize_case_name(cl_case_name.lower())
+
+ overlap = west_set & cl_set
+ if not overlap:
+ # if no hits no match on name - move along
+ return False
+
+ # Check for "v." in title
+ if "v." not in west_case_name.lower() or (
+ len(cl_set) == 1 or len(west_set) == 1
+ ):
+ # in the matter of Smith
+ # if no V. - likely an "in re" case and only match on at least 1 name
+ return True
+
+ # otherwise check if a match occurs on both sides of the `v.`
+ v_index = west_case_name.lower().index("v.")
+ hit_indices = [west_case_name.lower().find(hit) for hit in overlap]
+ return min(hit_indices) < v_index < max(hit_indices)
+
+
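+# An illustration of check_case_names_match with hypothetical inputs (not from
+# the dataset): check_case_names_match("Smith v. Jones", "Jones against Smith")
+# returns True, because the overlapping tokens fall on both sides of the "v."
+# in the West case name.
+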
+def parse_date(date_str: str) -> date | None:
+ """Attempts to parse the filed date into a datetime object.
+
+ January 10, 1999
+ 24-Jul-97
+ 21-Jan-94
+ 1/17/1961
+ 12/1/1960
+ 26-Sep-00
+ Feb. 28, 2001
+ 2007-01-24
+
+ :param date_str: date string
+ :return: date object or none
+ """
+ for fmt in DATE_FORMATS:
+ try:
+ return datetime.strptime(date_str, fmt).date()
+ except (ValueError, TypeError):
+ continue
+ logger.warning("Invalid date format: %s", date_str)
+ return None
+
+
+def parse_citations(citation_strings: list[str]) -> list[dict]:
+ """Validate citations with Eyecite.
+
+ :param citation_strings: List of citation strings to validate.
+ :return: List of validated citation dictionaries with volume, reporter, and page.
+ """
+ validated_citations = []
+
+ for cite_str in citation_strings:
+ # Get citations from the string
+
+ # We find all the citations that could match a cluster to update the case name
+ found_cites = get_citations(cite_str, tokenizer=HYPERSCAN_TOKENIZER)
+ if not found_cites:
+ continue
+
+ citation = found_cites[0]
+
+ # Ensure we have valid citations to process
+ if isinstance(citation, FullCaseCitation):
+ volume = citation.groups.get("volume")
+
+ # Validate the volume
+ if not volume or not volume.isdigit():
+ continue
+
+ cite_type_str = citation.all_editions[0].reporter.cite_type
+ reporter_type = map_reporter_db_cite_type(cite_type_str)
+
+ # Append the validated citation as a dictionary
+ validated_citations.append(
+ {
+ "volume": citation.groups["volume"],
+ "reporter": citation.corrected_reporter(),
+ "page": citation.groups["page"],
+ "type": reporter_type,
+ }
+ )
+
+ return validated_citations
+
+
+def query_possible_matches(
+ valid_citations: list[dict], docket_number: str, date_filed: date
+) -> QuerySet[Citation]:
+ """Find matches for row data
+
+ Duplicates are removed; they can occur if we already have both citations.
+ If there are multiple matches, they must be unique clusters.
+
+ :param valid_citations: list of validated citation dicts
+ :param docket_number: cleaned docket number from row
+ :param date_filed: formatted filed date from row
+
+ :return: queryset of Citation objects with distinct clusters
+ """
+
+ citation_queries = Q()
+
+ for citation in valid_citations:
+ citation_query = Q(**citation) & Q(
+ cluster__docket__docket_number__contains=docket_number,
+ cluster__date_filed=date_filed,
+ )
+ citation_queries |= citation_query
+ possible_matches = (
+ Citation.objects.filter(citation_queries)
+ .select_related("cluster")
+ .distinct("cluster__id")
+ )
+
+ return possible_matches
+
+
+def update_matched_case_name(
+ matched_cluster: OpinionCluster, west_case_name: str
+) -> tuple[bool, bool]:
+ """Update case name of matched cluster and related docket if empty any of them
+
+ :param matched_cluster: OpinionCluster object
+ :param west_case_name: case name from csv row
+ :return: tuple of booleans indicating whether the cluster and docket case names were updated
+ """
+ cluster_case_name_updated = False
+ docket_case_name_updated = False
+
+ if not matched_cluster.case_name:
+ # Save case name in cluster when we don't have it
+ matched_cluster.case_name = west_case_name
+ matched_cluster.save()
+ logger.info("Case name updated for cluster id: %s", matched_cluster.id)
+ cluster_case_name_updated = True
+
+ if not matched_cluster.docket.case_name:
+ # Save case name in docket when we don't have it
+ matched_cluster.docket.case_name = west_case_name
+ matched_cluster.docket.save()
+ logger.info(
+ "Case name updated for docket id: %s", matched_cluster.docket.id
+ )
+ docket_case_name_updated = True
+
+ return cluster_case_name_updated, docket_case_name_updated
+
+
+def process_csv(
+ filepath: str,
+ delay: float,
+ dry_run: bool,
+ limit: int | None,
+ start_row: int,
+) -> None:
+ """Process rows from csv file
+
+ :param filepath: path to csv file
+ :param delay: delay between saves in seconds
+ :param dry_run: flag to simulate update process
+ :param limit: limit number of rows to process
+    :param start_row: row number to start processing from (inclusive)
+ """
+
+ total_clusters_updated = 0
+ total_dockets_updated = 0
+ total_citations_added = 0
+
+ logger.info("Processing %s", filepath)
+
+ # Generate rows to skip, excluding the header row
+ skip_rows = list(range(1, start_row)) if start_row else None
+
+ df = pd.read_csv(filepath, skiprows=skip_rows, nrows=limit).dropna()
+
+ # Reset the index to start from 0 (needed if we pass skip_rows param)
+ df.reset_index(drop=True, inplace=True)
+
+ if start_row:
+ # Update rows index to reflect the original csv row numbers
+ df.index = range(start_row, start_row + len(df))
+
+ for row in df.itertuples():
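+        # Column order assumed from this unpacking: case name, court,
+        # filed date, two West citations, docket number, and one
+        # trailing (ignored) column.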
+ index, case_name, court, date_str, cite1, cite2, docket, _ = row
+ west_case_name = harmonize(case_name)
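+        # Strip spreadsheet-style ="..." wrapping from the docket number.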
+ clean_docket_num = docket.strip('="').strip('"')
+ if not clean_docket_num:
+ logger.info("Row index: %s - No docket number found.", index)
+ continue
+
+ date_filed = parse_date(date_str)
+ if not date_filed:
+ logger.info(
+ "Row index: %s - No valid date found: %s", index, date_str
+ )
+ continue
+
+ west_citations: list[str] = [cite1, cite2]
+ valid_citations = parse_citations(west_citations)
+
+ if not valid_citations:
+ logger.info("Row index: %s - Missing valid citations.", index)
+ continue
+
+ # Query for possible matches using data from row
+ possible_matches = query_possible_matches(
+ valid_citations=valid_citations,
+ docket_number=clean_docket_num,
+ date_filed=date_filed,
+ )
+
+ if not possible_matches:
+ logger.info("Row index: %s - No possible matches found.", index)
+ continue
+
+ matches = []
+ for match in possible_matches:
+            cl_case_name = (
+                match.cluster.case_name_full
+                or match.cluster.case_name
+            )
+
+ case_name_match = check_case_names_match(
+ west_case_name, cl_case_name
+ )
+ if case_name_match:
+ matches.append(match.cluster)
+
+ if len(matches) == 0:
+ # No match found within possible matches, go to next row
+ logger.info(
+ "Row index: %s - No match found within possible matches.",
+ index,
+ )
+ continue
+ elif len(matches) > 1:
+ # More than one match, log and go to next row
+ matches_found = ", ".join([str(cluster.id) for cluster in matches])
+ logger.warning(
+ "Row index: %s - Multiple matches found: %s",
+ index,
+ matches_found,
+ )
+ continue
+
+ # Single match found
+ logger.info(
+ "Row index: %s - Match found: %s - West case name: %s",
+ index,
+ matches[0].id,
+ west_case_name,
+ )
+
+ if dry_run:
+ # Dry run, don't save anything
+ continue
+
+ with transaction.atomic():
+ matched_cluster = matches[0]
+
+ # Update case names
+ cluster_updated, docket_updated = update_matched_case_name(
+ matched_cluster, west_case_name
+ )
+
+            if cluster_updated:
+                total_clusters_updated += 1
+
+            if docket_updated:
+                total_dockets_updated += 1
+
+ # Add any of the citations if possible
+ for citation in valid_citations:
+
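+                # Attach the matched cluster so the duplicate checks and
+                # creation below operate on this cluster.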
+ citation["cluster_id"] = matched_cluster.id
+ if Citation.objects.filter(**citation).exists():
+ # We already have the citation
+ continue
+ elif Citation.objects.filter(
+ cluster_id=citation["cluster_id"],
+ reporter=citation.get("reporter"),
+ ).exists():
+                    # Same reporter, different citation: revert changes
+ logger.warning(
+ "Row index: %s - Revert changes for cluster id: %s",
+ index,
+ matched_cluster.id,
+ )
+ transaction.set_rollback(True)
+ break
+ else:
+ new_citation = Citation.objects.create(**citation)
+ logger.info(
+ "New citation added: %s to cluster id: %s",
+ new_citation,
+ matched_cluster.id,
+ )
+ total_citations_added += 1
+
+        # Wait between each processed row to avoid sending too many indexing tasks
+ time.sleep(delay)
+
+ logger.info("Clusters updated: %s", total_clusters_updated)
+ logger.info("Dockets updated: %s", total_dockets_updated)
+ logger.info("Citations added: %s", total_citations_added)
+
+
+class Command(BaseCommand):
+ help = "Match and compare case details from a CSV file with existing records in the database."
+
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "--filepath",
+ type=str,
+ required=True,
+ help="Path to the CSV file to process.",
+ )
+ parser.add_argument(
+ "--delay",
+ type=float,
+ default=0.1,
+ help="How long to wait to update each opinion cluster (in seconds, allows floating numbers).",
+ )
+ parser.add_argument(
+ "--dry-run",
+ action="store_true",
+ help="Simulate the update process without making changes",
+ )
+ parser.add_argument(
+ "--start-row",
+ default=0,
+ type=int,
+ help="Start row (inclusive).",
+ )
+ parser.add_argument(
+ "--limit",
+ default=None,
+ type=int,
+ help="Limit number of rows to process.",
+ required=False,
+ )
+
+ def handle(self, *args, **options):
+ filepath = options["filepath"]
+ delay = options["delay"]
+ dry_run = options["dry_run"]
+ limit = options["limit"]
+ start_row = options["start_row"]
+
+ if not filepath:
+ raise CommandError(
+ "Filepath is required. Use --filepath to specify the CSV file location."
+ )
+
+ process_csv(filepath, delay, dry_run, limit, start_row)
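+
+# Example invocation (illustrative):
+#   manage.py update_casenames_wl_dataset --filepath /path/to/westlaw.csv \
+#     --dry-run --start-row 100 --limit 50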
diff --git a/cl/corpus_importer/signals.py b/cl/corpus_importer/signals.py
index d2443b62f3..08254d7d85 100644
--- a/cl/corpus_importer/signals.py
+++ b/cl/corpus_importer/signals.py
@@ -76,6 +76,10 @@ def update_latest_case_id_and_schedule_iquery_sweep(docket: Docket) -> None:
countdown=task_scheduled_countdown,
queue=settings.CELERY_IQUERY_QUEUE,
)
+ logger.info(
+ f"Enqueued iquery docket case ID: {iquery_pacer_case_id_current} "
+ f"for court {court_id} with countdown {task_scheduled_countdown}"
+ )
# Update the iquery_pacer_case_id_current in Redis
r.hset(
diff --git a/cl/corpus_importer/tasks.py b/cl/corpus_importer/tasks.py
index 8ed46333f7..bfa21e43b5 100644
--- a/cl/corpus_importer/tasks.py
+++ b/cl/corpus_importer/tasks.py
@@ -25,6 +25,7 @@
from httpx import (
HTTPStatusError,
NetworkError,
+ ReadError,
RemoteProtocolError,
TimeoutException,
)
@@ -598,6 +599,7 @@ def process_free_opinion_result(
ConnectionError,
ReadTimeout,
RedisConnectionError,
+ ReadError,
),
max_retries=15,
interval_start=5,
diff --git a/cl/corpus_importer/tests.py b/cl/corpus_importer/tests.py
index 7a76435ded..5b3d858897 100644
--- a/cl/corpus_importer/tests.py
+++ b/cl/corpus_importer/tests.py
@@ -62,6 +62,9 @@
log_added_items_to_redis,
merge_rss_data,
)
+from cl.corpus_importer.management.commands.update_casenames_wl_dataset import (
+ check_case_names_match,
+)
from cl.corpus_importer.signals import (
handle_update_latest_case_id_and_schedule_iquery_sweep,
update_latest_case_id_and_schedule_iquery_sweep,
@@ -3343,7 +3346,7 @@ def test_merger(self):
@patch("cl.corpus_importer.tasks.get_or_cache_pacer_cookies")
@override_settings(
- IQUERY_PROBE_DAEMON_ENABLED=True,
+ IQUERY_CASE_PROBE_DAEMON_ENABLED=True,
IQUERY_SWEEP_UPLOADS_SIGNAL_ENABLED=True,
EGRESS_PROXY_HOSTS=["http://proxy_1:9090", "http://proxy_2:9090"],
)
@@ -4078,3 +4081,56 @@ def test_probe_iquery_pages_daemon_court_got_stuck(
f"iquery:court_empty_probe_attempts:{self.court_cacd.pk}"
)
self.assertEqual(int(court_empty_attempts), 0)
+
+
+class CaseNamesTest(SimpleTestCase):
+ def test_check_case_names_match(self) -> None:
+ """Can we check if the case names match?"""
+ case_names_tests = (
+ (
+ "U.S. v. Smith",
+ "United States v. Smith",
+ True,
+ ),
+ (
+ "United States v. Guerrero-Martinez", # 736793
+ "United States v. Hector Guerrero-Martinez, AKA Hector Guerrero AKA Hector Martinez-Guerrero",
+ True,
+ ),
+ (
+ "In re CP", # 2140442
+ "In Re CP",
+ True,
+ ),
+ (
+ "Dennis v. City of Easton", # 730246
+ "Richard Dennis, Penelope Dennis, Loretta M. Dennis v. City of Easton, Edward J. Ferraro, Robet S. Stein, Doris Asteak, Paul Schleuter, Howard B. White, Easton Board of Health",
+ True,
+ ),
+ (
+ "Parmelee v. Bruggeman", # 736598
+ "Allan Parmelee v. Milford Bruggeman Janine Bruggeman Friend of the Court for the State of Michigan Nancy Rose, Employee of the State of Michigan for the Friend of the Court Glenda Friday, Employee of the State of Michigan for the Friend of the Court Karen Dunn, Employee of the State of Michigan for the Friend of the Court Thomas Kreckman, Employee of the State of Michigan for the Friend of the Court State of Michigan",
+ True,
+ ),
+ (
+ "Automobile Assur. Financial Corp. v. Syrett Corp.", # 735935
+ "Automobile Assurance Financial Corporation, a Utah Corporation Venuti and Associates, Inc., a Utah Corporation Venuti Partners, Ltd., a Utah Limited Partnership Frank P. Venuti, an Individual, Parker M. Nielson v. Syrett Corporation, a Delaware Corporation, Formerly a Utah Corporation, John R. Riley, an Individual, Third-Party-Defendant",
+ True,
+ ),
+ (
+ "Christopher Ambroze, M.D., PC v. Aetna Health Plans of New York, Inc.", # 735476
+ "Christopher Ambroze, M.D., P.C., Rockville Anesthesia Group, Llp, Harvey Finkelstein, Plainview Anesthesiologists, P.C., Joseph A. Singer, Atlantic Anesthesia Associates, P.C. v. Aetna Health Plans of New York, Inc., Aetna Health Management, Inc., Aetna Life and Casualty Company, C. Frederick Berger, and Gregg Stolzberg",
+ True,
+ ),
+ (
+ "O'Neal v. Merkel", # 730350
+ "Terence Kenneth O'Neal v. T.E. Merkel Nurse Cashwell Nurse Allen Nurse Davis Mr. Conn, and Franklin E. Freeman, Jr. Gary Dixon Doctor Lowy Doctor Shaw Doctor Castalloe Harry Allsbrook Mr. Cherry",
+ True,
+ ),
+ )
+ for wl_casename, cl_casename, overlap in case_names_tests:
+ self.assertEqual(
+ check_case_names_match(wl_casename, cl_casename),
+ overlap,
+ msg=f"Case names don't match: {wl_casename} - {cl_casename}",
+ )
diff --git a/cl/custom_filters/templatetags/extras.py b/cl/custom_filters/templatetags/extras.py
index 39d535b2df..6532ca2881 100644
--- a/cl/custom_filters/templatetags/extras.py
+++ b/cl/custom_filters/templatetags/extras.py
@@ -1,7 +1,7 @@
import random
import re
import urllib.parse
-from datetime import datetime
+from datetime import datetime, timezone
import waffle
from django import template
@@ -337,6 +337,21 @@ def format_date(date_str: str) -> str:
return date_str
+@register.filter
+def datetime_in_utc(date_obj) -> str:
+ """Formats a datetime object in UTC with timezone displayed.
+ For example: 'Nov. 25, 2024, 01:28 p.m. UTC'"""
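+    # Template usage (see the webhooks template):
+    #   {{ webhook_event.next_retry_date|datetime_in_utc }}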
+ if date_obj is None:
+ return ""
+ try:
+ return date_filter(
+ date_obj.astimezone(timezone.utc),
+ "M. j, Y, h:i a T",
+ )
+ except (ValueError, TypeError):
+ return date_obj
+
+
@register.filter
def build_docket_id_q_param(request_q: str, docket_id: str) -> str:
"""Build a query string that includes the docket ID and any existing query
diff --git a/cl/disclosures/api_views.py b/cl/disclosures/api_views.py
index 1c1be6f3a4..64ce52bac4 100644
--- a/cl/disclosures/api_views.py
+++ b/cl/disclosures/api_views.py
@@ -1,4 +1,5 @@
from rest_framework import viewsets
+from rest_framework.permissions import DjangoModelPermissionsOrAnonReadOnly
from cl.api.api_permissions import V3APIPermission
from cl.api.utils import LoggingMixin
@@ -40,7 +41,10 @@
class AgreementViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Agreement.objects.all().order_by("-id")
serializer_class = AgreementSerializer
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
filterset_class = AgreementFilter
# Default cursor ordering key
@@ -56,7 +60,10 @@ class AgreementViewSet(LoggingMixin, viewsets.ModelViewSet):
class DebtViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Debt.objects.all().order_by("-id")
serializer_class = DebtSerializer
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
filterset_class = DebtFilter
# Default cursor ordering key
@@ -87,7 +94,10 @@ class FinancialDisclosureViewSet(LoggingMixin, viewsets.ModelViewSet):
)
serializer_class = FinancialDisclosureSerializer
filterset_class = FinancialDisclosureFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -103,7 +113,10 @@ class GiftViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Gift.objects.all().order_by("-id")
serializer_class = GiftSerializer
filterset_class = GiftFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -119,7 +132,10 @@ class InvestmentViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Investment.objects.all().order_by("-id")
serializer_class = InvestmentSerializer
filterset_class = InvestmentFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -135,7 +151,10 @@ class NonInvestmentIncomeViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = NonInvestmentIncome.objects.all().order_by("-id")
serializer_class = NonInvestmentIncomeSerializer
filterset_class = NonInvestmentIncomeFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -151,7 +170,10 @@ class PositionViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Position.objects.all().order_by("-id")
serializer_class = PositionSerializer
filterset_class = PositionFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -167,7 +189,10 @@ class ReimbursementViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = Reimbursement.objects.all().order_by("-id")
serializer_class = ReimbursementSerializer
filterset_class = ReimbursementFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
@@ -183,7 +208,10 @@ class SpouseIncomeViewSet(LoggingMixin, viewsets.ModelViewSet):
queryset = SpouseIncome.objects.all().order_by("-id")
serializer_class = SpouseIncomeSerializer
filterset_class = SpouseIncomeFilter
- permission_classes = [V3APIPermission]
+ permission_classes = [
+ DjangoModelPermissionsOrAnonReadOnly,
+ V3APIPermission,
+ ]
ordering_fields = ("id", "date_created", "date_modified")
# Default cursor ordering key
ordering = "-id"
diff --git a/cl/favorites/tests.py b/cl/favorites/tests.py
index e2aa34ab56..9518884d95 100644
--- a/cl/favorites/tests.py
+++ b/cl/favorites/tests.py
@@ -14,6 +14,7 @@
from django.utils.timezone import make_naive, now
from selenium.webdriver.common.by import By
from timeout_decorator import timeout_decorator
+from waffle.testutils import override_flag
from cl.custom_filters.templatetags.pacer import price
from cl.favorites.factories import NoteFactory, PrayerFactory
@@ -107,6 +108,7 @@ def setUp(self) -> None:
super().setUp()
@timeout_decorator.timeout(SELENIUM_TIMEOUT)
+ @override_flag("ui_flag_for_o", False)
def test_anonymous_user_is_prompted_when_favoriting_an_opinion(
self,
) -> None:
@@ -167,6 +169,7 @@ def test_anonymous_user_is_prompted_when_favoriting_an_opinion(
modal_title = self.browser.find_element(By.ID, "save-note-title")
self.assertIn("Save Note", modal_title.text)
+ @override_flag("ui_flag_for_o", False)
@timeout_decorator.timeout(SELENIUM_TIMEOUT)
def test_logged_in_user_can_save_note(self) -> None:
# Meta: assure no Faves even if part of fixtures
diff --git a/cl/lib/command_utils.py b/cl/lib/command_utils.py
index 2c3797f9f5..ee86463812 100644
--- a/cl/lib/command_utils.py
+++ b/cl/lib/command_utils.py
@@ -3,6 +3,8 @@
from django.core.management import BaseCommand, CommandError
+from cl.lib.juriscraper_utils import get_module_by_court_id
+
logger = logging.getLogger(__name__)
@@ -22,6 +24,40 @@ def handle(self, *args, **options):
juriscraper_logger.setLevel(logging.DEBUG)
+class ScraperCommand(VerboseCommand):
+ """Base class for cl.scrapers commands that use Juriscraper
+
+    Implements the `--courts` argument used to look up a Site object
+ """
+
+    # Passed to get_module_by_court_id;
+    # defined by inheriting classes
+ juriscraper_module_type = ""
+
+ def add_arguments(self, parser):
+ parser.add_argument(
+ "--courts",
+ dest="court_id",
+ metavar="COURTID",
+ type=lambda s: (
+ s
+ if "." in s
+ else get_module_by_court_id(s, self.juriscraper_module_type)
+ ),
+ required=True,
+            help=(
+                "The court(s) to scrape and extract. One of: "
+                "1. a Python module or package import path from the "
+                "Juriscraper library, e.g. "
+                "'juriscraper.opinions.united_states.federal_appellate.ca1', "
+                "or simply 'juriscraper.opinions' to do all opinions; "
+                "2. a court_id, used to look up the full module path. "
+                "An error will be raised if the `court_id` matches more "
+                "than one module path. In that case, use the full path."
+            ),
+ )
+
+
class CommandUtils:
"""A mixin to give some useful methods to sub classes."""
diff --git a/cl/lib/elasticsearch_utils.py b/cl/lib/elasticsearch_utils.py
index 129115ff20..93d15948ad 100644
--- a/cl/lib/elasticsearch_utils.py
+++ b/cl/lib/elasticsearch_utils.py
@@ -4,6 +4,7 @@
import re
import time
import traceback
+from collections import defaultdict
from copy import deepcopy
from dataclasses import fields
from functools import reduce, wraps
@@ -175,22 +176,45 @@ async def build_more_like_this_query(related_ids: list[str]) -> Query:
exclusions for specific opinion clusters.
"""
- document_list = [{"_id": f"o_{id}"} for id in related_ids]
+ opinion_cluster_pairs = [
+ opinion_pair
+ for opinion_id in related_ids
+ if (
+ opinion_pair := await Opinion.objects.filter(pk=opinion_id)
+ .values("pk", "cluster_id")
+ .afirst()
+ )
+ ]
+ unique_clusters = {pair["cluster_id"] for pair in opinion_cluster_pairs}
+
+ document_list = [
+ {
+ "_id": f'o_{pair["pk"]}',
+ "routing": pair["cluster_id"],
+            # Routing is required to match documents in the production cluster
+ }
+ for pair in opinion_cluster_pairs
+ ] or [
+ {"_id": f"o_{pk}"} for pk in related_ids
+ ] # Fallback in case IDs are not found in the database.
+ # The user might have provided non-existent Opinion IDs.
+ # This ensures that the query does not raise an error and instead returns
+ # no results.
+
more_like_this_fields = SEARCH_MLT_OPINION_QUERY_FIELDS.copy()
mlt_query = Q(
"more_like_this",
fields=more_like_this_fields,
like=document_list,
- min_term_freq=1,
- max_query_terms=12,
+ min_term_freq=settings.RELATED_MLT_MINTF,
+ max_query_terms=settings.RELATED_MLT_MAXQT,
+ min_word_length=settings.RELATED_MLT_MINWL,
+ max_word_length=settings.RELATED_MLT_MAXWL,
+ max_doc_freq=settings.RELATED_MLT_MAXDF,
+ analyzer="search_analyzer_exact",
)
# Exclude opinion clusters to which the related IDs to query belong.
- cluster_ids_to_exclude = (
- OpinionCluster.objects.filter(sub_opinions__pk__in=related_ids)
- .distinct("pk")
- .values_list("pk", flat=True)
- )
- cluster_ids_list = [pk async for pk in cluster_ids_to_exclude.aiterator()]
+ cluster_ids_list = list(unique_clusters)
exclude_cluster_ids = [Q("terms", cluster_id=cluster_ids_list)]
bool_query = Q("bool", must=[mlt_query], must_not=exclude_cluster_ids)
return bool_query
@@ -1239,7 +1263,7 @@ def build_es_base_query(
{"opinion": []},
[],
mlt_query,
- child_highlighting=False,
+ child_highlighting=True,
api_version=api_version,
)
)
@@ -1281,6 +1305,7 @@ def build_es_base_query(
mlt_query,
child_highlighting=child_highlighting,
api_version=api_version,
+ alerts=alerts,
)
)
@@ -2964,9 +2989,10 @@ def do_es_api_query(
child documents.
"""
+    alerts = hl_tag == ALERTS_HL_TAG
try:
es_queries = build_es_base_query(
- search_query, cd, cd["highlight"], api_version
+ search_query, cd, cd["highlight"], api_version, alerts=alerts
)
s = es_queries.search_query
child_docs_query = es_queries.child_query
@@ -3047,7 +3073,7 @@ def do_es_api_query(
# parameters as in the frontend. Only switch highlighting according
# to the user request.
main_query = add_es_highlighting(
- s, cd, highlighting=cd["highlight"]
+ s, cd, alerts=alerts, highlighting=cd["highlight"]
)
return main_query, child_docs_query
@@ -3081,7 +3107,7 @@ def build_cardinality_count(count_query: Search, unique_field: str) -> Search:
def do_collapse_count_query(
search_type: str, main_query: Search, query: Query
-) -> int | None:
+) -> int:
"""Execute an Elasticsearch count query for queries that uses collapse.
Uses a query with aggregation to determine the number of unique opinions
based on the 'cluster_id' or 'docket_id' according to the search_type.
@@ -3106,7 +3132,7 @@ def do_collapse_count_query(
f"Error on count query request: {search_query.to_dict()}"
)
logger.warning(f"Error was: {e}")
- total_results = None
+ total_results = 0
return total_results
@@ -3216,18 +3242,20 @@ def do_es_sweep_alert_query(
multi_search = multi_search.add(main_query)
if parent_query:
parent_search = search_query.query(parent_query)
+ # Ensure accurate tracking of total hit count for up to 10,001 query results
parent_search = parent_search.extra(
- from_=0, size=settings.SCHEDULED_ALERT_HITS_LIMIT
+ from_=0,
+ track_total_hits=settings.ELASTICSEARCH_MAX_RESULT_COUNT + 1,
)
parent_search = parent_search.source(includes=["docket_id"])
multi_search = multi_search.add(parent_search)
if child_query:
child_search = child_search_query.query(child_query)
+ # Ensure accurate tracking of total hit count for up to 10,001 query results
child_search = child_search.extra(
from_=0,
- size=settings.SCHEDULED_ALERT_HITS_LIMIT
- * settings.RECAP_CHILD_HITS_PER_RESULT,
+ track_total_hits=settings.ELASTICSEARCH_MAX_RESULT_COUNT + 1,
)
child_search = child_search.source(includes=["id"])
multi_search = multi_search.add(child_search)
@@ -3241,15 +3269,45 @@ def do_es_sweep_alert_query(
if child_query:
rd_results = responses[2]
+ # Re-run parent query to fetch potentially missed docket IDs due to large
+ # result sets.
+ should_repeat_parent_query = (
+ docket_results
+ and docket_results.hits.total.value
+ >= settings.ELASTICSEARCH_MAX_RESULT_COUNT
+ )
+ if should_repeat_parent_query:
+ docket_ids = [int(d.docket_id) for d in main_results]
+ # Adds extra filter to refine results.
+ parent_query.filter.append(Q("terms", docket_id=docket_ids))
+ parent_search = search_query.query(parent_query)
+ parent_search = parent_search.source(includes=["docket_id"])
+ docket_results = parent_search.execute()
+
limit_inner_hits({}, main_results, cd["type"])
set_results_highlights(main_results, cd["type"])
- for result in main_results:
- child_result_objects = []
- if hasattr(result, "child_docs"):
- for child_doc in result.child_docs:
- child_result_objects.append(child_doc.to_dict())
- result["child_docs"] = child_result_objects
+ # This block addresses a potential issue where the initial child query
+ # might not return all expected results, especially when the result set is
+ # large. To ensure complete data retrieval, it extracts child document IDs
+ # from the main results and refines the child query filter with these IDs.
+ # Finally, it re-executes the child search.
+ should_repeat_child_query = (
+ rd_results
+ and rd_results.hits.total.value
+ >= settings.ELASTICSEARCH_MAX_RESULT_COUNT
+ )
+ if should_repeat_child_query:
+ rd_ids = [
+ int(rd["_source"]["id"])
+ for docket in main_results
+ if hasattr(docket, "child_docs")
+ for rd in docket.child_docs
+ ]
+ child_query.filter.append(Q("terms", id=rd_ids))
+ child_search = child_search_query.query(child_query)
+ child_search = child_search.source(includes=["id"])
+ rd_results = child_search.execute()
return main_results, docket_results, rd_results
@@ -3279,3 +3337,45 @@ def simplify_estimated_count(search_count: int) -> int:
zeroes = (len(search_count_str) - 2) * "0"
return int(first_two + zeroes)
return search_count
+
+
+def set_child_docs_and_score(
+ results: list[Hit] | list[dict[str, Any]] | Response,
+ merge_highlights: bool = False,
+ merge_score: bool = False,
+) -> None:
+ """Process and attach child documents to the main search results.
+
+ :param results: A list of search results, which can be ES Hit objects
+ or a list of dicts.
+ :param merge_highlights: A boolean indicating whether to merge
+ highlight data into the results.
+ :param merge_score: A boolean indicating whether to merge
+ the BM25 score into the results.
+ :return: None. Results are modified in place.
+ """
+
+ for result in results:
+ result_is_dict = isinstance(result, dict)
+ if result_is_dict:
+            # If the result is a dict, keep its existing child_docs or
+            # default to an empty list if not present.
+ result["child_docs"] = result.get("child_docs", [])
+ else:
+ # Process child hits if the result is an ES AttrDict instance,
+ # so they can be properly serialized.
+ child_docs = getattr(result, "child_docs", [])
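+            # defaultdict(lambda: None, ...) lets serializers read missing
+            # keys as None instead of raising a KeyError.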
+ result["child_docs"] = [
+ defaultdict(lambda: None, doc["_source"].to_dict())
+ for doc in child_docs
+ ]
+
+ # Optionally merges highlights. Used for integrating percolator
+ # highlights into the percolated document.
+ if merge_highlights and result_is_dict:
+ meta_hl = result.get("meta", {}).get("highlight", {})
+ merge_highlights_into_result(meta_hl, result)
+
+ # Optionally merges the BM25 score for display in the API.
+ if merge_score and isinstance(result, AttrDict):
+ result["bm25_score"] = result.meta.score
diff --git a/cl/lib/juriscraper_utils.py b/cl/lib/juriscraper_utils.py
index ae8c090f41..f2484e8b86 100644
--- a/cl/lib/juriscraper_utils.py
+++ b/cl/lib/juriscraper_utils.py
@@ -5,6 +5,12 @@
import juriscraper
+def walk_juriscraper():
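+    """Iterate over all modules in the juriscraper package tree."""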
+ return pkgutil.walk_packages(
+ juriscraper.__path__, f"{juriscraper.__name__}."
+ )
+
+
def get_scraper_object_by_name(court_id: str, juriscraper_module: str = ""):
"""Identify and instantiate a Site() object given the name of a court
@@ -25,9 +31,7 @@ def get_scraper_object_by_name(court_id: str, juriscraper_module: str = ""):
return importlib.import_module(juriscraper_module).Site()
- for _, full_module_path, _ in pkgutil.walk_packages(
- juriscraper.__path__, f"{juriscraper.__name__}."
- ):
+ for _, full_module_path, _ in walk_juriscraper():
# Get the module name from the full path and trim
# any suffixes like _p, _u
module_name = full_module_path.rsplit(".", 1)[1].rsplit("_", 1)[0]
@@ -42,3 +46,45 @@ def get_scraper_object_by_name(court_id: str, juriscraper_module: str = ""):
# has been stripped off it. In any case, just ignore it when
# this happens.
continue
+
+
+def get_module_by_court_id(court_id: str, module_type: str) -> str:
+ """Given a `court_id` return a juriscraper module path
+
+    Some court_ids match multiple scraper files, which forces the user
+    to use the full module path. For example, the modules "lactapp_1" and
+    "lactapp_5" both map to the same `court_id`, but scrape totally
+    different sites, and their Site objects are expected to have
+    different `extract_from_text` behavior
+
+ :param court_id: court id to look for
+ :param module_type: 'opinions' or 'oral_args'. Without this, some
+    court_ids may match both classes of scrapers
+
+ :raises: ValueError if there is no match or there is more than 1 match
+ :return: the full module path
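+
+    Illustrative example:
+    get_module_by_court_id("ca1", "opinions") ->
+    "juriscraper.opinions.united_states.federal_appellate.ca1"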
+ """
+ if module_type not in ["opinions", "oral_args"]:
+ raise ValueError(
+ "module_type has to be one of ['opinions', 'oral_args']"
+ )
+
+ matches = []
+ for _, module_string, _ in walk_juriscraper():
+ if module_string.count(".") != 4 or module_type not in module_string:
+            # Skip package folders, lib modules, and modules of the wrong type
+ continue
+
+ module_court_id = module_string.rsplit(".", 1)[1].rsplit("_", 1)[0]
+ if module_court_id == court_id:
+ matches.append(module_string)
+
+ if len(matches) == 1:
+ return matches[0]
+ elif len(matches) == 0:
+ raise ValueError(f"'{court_id}' doesn't match any juriscraper module")
+ else:
+ raise ValueError(
+ f"'{court_id}' matches more than 1 juriscraper module."
+ f"Use a full module path. Matches: '{matches}'"
+ )
diff --git a/cl/lib/utils.py b/cl/lib/utils.py
index 223056420f..592f8876d0 100644
--- a/cl/lib/utils.py
+++ b/cl/lib/utils.py
@@ -248,7 +248,7 @@ def cleanup_main_query(query_string: str) -> str:
"""
inside_a_phrase = False
cleaned_items = []
- for item in re.split(r'([^a-zA-Z0-9_\-~":]+)', query_string):
+ for item in re.split(r'([^a-zA-Z0-9_\-^~":]+)', query_string):
if not item:
continue
diff --git a/cl/opinion_page/templates/includes/add_download_button.html b/cl/opinion_page/templates/includes/add_download_button.html
new file mode 100644
index 0000000000..bcd7a508ea
--- /dev/null
+++ b/cl/opinion_page/templates/includes/add_download_button.html
@@ -0,0 +1,46 @@
+
+ {% for group in parenthetical_groups %}
+ {% with representative=group.representative %}
+ {% with representative_cluster=representative.describing_opinion.cluster %}
+
+
+ {% for summary in group.parentheticals.all %}
+ {% with describing_cluster=summary.describing_opinion.cluster %}
+ {% if summary != representative %}
+
+ Source:
+
+ {% if cluster.filepath_pdf_harvard %}
+ Case Law Access Project
+ {% elif pdf_path %}
+ {{ cluster.docket.court }}
+ {% endif %}
+
+
+
+
+
+
+
+
+{% else %}
+
+  {# The section of the document referred to as headmatter goes here #}
+
+
+ {% with opinion_count=cluster.sub_opinions.all.count %}
+
+ {% if cluster.headmatter %}
+
Headmatter
+
+
+ {{ cluster.headmatter|safe }}
+
+ {% else %}
+ {% if cluster.correction %}
+
Correction
+
+
+ {{ cluster.correction|safe }}
+
+ {% endif %}
+
+ {% if cluster.attorneys %}
+
Attorneys
+
+
+
{{ cluster.attorneys|safe|linebreaksbr }}
+
+ {% endif %}
+
+ {% if cluster.headnotes %}
+
Headnotes
+
+
{{ cluster.headnotes | safe}}
+ {% endif %}
+
+ {% if cluster.syllabus %}
+
Syllabus
+
+
+ {{ cluster.syllabus|safe }}
+
+ {% endif %}
+
+ {% if cluster.summary %}
+
Summary
+
+
+ {{ cluster.summary|safe }}
+
+ {% endif %}
+ {% if cluster.history %}
+
History
+
+
+ {{ cluster.history|safe }}
+
+ {% endif %}
+
+ {% if cluster.disposition %}
+
Disposition
+
+
+ {{ cluster.disposition|safe }}
+
+ {% endif %}
+ {% endif %}
+
+ {% for sub_opinion in cluster.ordered_opinions %}
+
+ {{ sub_opinion.get_type_display }}
+ {% if sub_opinion.author %}
+ by {{ sub_opinion.author.name_full }}
+ {% elif sub_opinion.author_str %}
+ by {{ sub_opinion.author_str }}
+ {% endif %}
+
+
+
+ {% if 'U' in cluster.source %}
+
+ {% elif 'Z' in cluster.source %}
+
+ {% elif 'L' in cluster.source %}
+
+ {% elif 'R' in cluster.source %}
+
+ {% else %}
+
+ {% endif %}
+
+
+ {% if sub_opinion.xml_harvard and sub_opinion.html_with_citations %}
+
{{ sub_opinion.html_with_citations|safe }}
+ {% elif sub_opinion.xml_harvard %}
+
{{ sub_opinion.xml_harvard|safe }}
+ {% elif sub_opinion.html_with_citations %}
+ {% if cluster.source == "C" %}
+            {# It's a PDF with no HTML enrichment #}
+ {% if sub_opinion.html %}
+              {# For scraped HTML, e.g. Colo., Okla., we do not want to insert line breaks #}
+
+
+{% endif %}
\ No newline at end of file
diff --git a/cl/opinion_page/templates/opinion.html b/cl/opinion_page/templates/opinion.html
index 16a33820fd..a0c4c797c7 100644
--- a/cl/opinion_page/templates/opinion.html
+++ b/cl/opinion_page/templates/opinion.html
@@ -100,7 +100,7 @@
Summaries ({{ summaries_count|intcomma }})
{% endfor %}
-
View All Summaries
diff --git a/cl/opinion_page/templates/opinions.html b/cl/opinion_page/templates/opinions.html
new file mode 100644
index 0000000000..3cb9746763
--- /dev/null
+++ b/cl/opinion_page/templates/opinions.html
@@ -0,0 +1,337 @@
+{% extends "base.html" %}
+{% load extras %}
+{% load humanize %}
+{% load static %}
+{% load text_filters %}
+
+
+{% block canonical %}{% get_canonical_element %}{% endblock %}
+{% block title %}{{ title }} – CourtListener.com{% endblock %}
+{% block og_title %}{{ title }} – CourtListener.com{% endblock %}
+{% block description %}{{ title }} — Brought to you by Free Law Project, a non-profit dedicated to creating high quality open legal information.{% endblock %}
+{% block og_description %}{{ cluster|best_case_name }}{% if summaries_count > 0 %} — {{ top_parenthetical_groups.0.representative.text|capfirst }}{% else %} — Brought to you by Free Law Project, a non-profit dedicated to creating high quality open legal information.{% endif %}
+{% endblock %}
+
+{% block head %}
+
+
+
+{% endblock %}
+
+
+{% block navbar-o %}active{% endblock %}
+
+
+{% block sidebar %}
+
+
+ {# show the admin tools if applicable #}
+ {% if perms.search.change_docket or perms.search.change_opinioncluster or perms.search.change_citation %}
+
+
Admin
+
+ {% if perms.search.change_docket %}
+ Docket
+ {% endif %}
+ {% if perms.search.change_opinioncluster %}
+ Cluster
+ {% endif %}
+ {% if perms.search.change_opinion %}
+ {% for sub_opinion in cluster.sub_opinions.all|dictsort:"type" %}
+ {{ sub_opinion.get_type_display|cut:"Opinion" }} opinion
+ {% endfor %}
+ {% endif %}
+ {% if request.user.is_superuser %}
+ {% if private %}
+
+ This page displays all the citations that have been extracted and linked in our system. Please note, it does not serve as a comprehensive list of all citations within the document.
+
+ The Related Cases query is used to find legal cases
+ related to a given case by analyzing textual similarities.
+ It identifies and retrieves cases with similar content,
+ allowing for the generation of a summary of related cases,
+ including their names, links, and filing dates,
+ to help users explore precedents or comparable rulings.
+
+
+
+ {% endif %}
+
+ {% if tab == "summaries" %}
+
+
+ Summaries or parenthetical groupings are used to
+ provide concise explanations or clarifications about a
+ case’s procedural posture, legal principles, or
+ facts that are immediately relevant to the citation,
+ typically enclosed in parentheses following a case citation.
+
+
+
+
+ {% endif %}
+
+
+
+ {# Sponsored by #}
+ {% if sponsored %}
+
+
+ Sponsored By
+
+
This opinion added to CourtListener with support from v|Lex.
{% if not webhook_event.debug %}{% if webhook_event.next_retry_date %}{{ webhook_event.next_retry_date }}{% else %}-{% endif %}{% else %}Test events will not be retried{% endif %}
+
{% if not webhook_event.debug %}{% if webhook_event.next_retry_date %}{{ webhook_event.next_retry_date|datetime_in_utc }}{% else %}-{% endif %}{% else %}Test events will not be retried{% endif %}