diff --git a/_events/2024-11-13-unicef-uk-digital-week.md b/_events/2024-11-13-unicef-uk-digital-week.md
new file mode 100644
index 0000000..97a6e17
--- /dev/null
+++ b/_events/2024-11-13-unicef-uk-digital-week.md
@@ -0,0 +1,71 @@
+---
+layout: event
+title: "UNICEF UK Digital Week: Risks and Benefits of AI"
+image: cbd-logo.png
+upcoming: false
+featured: false
+writeup: true
+date: 2024-11-13
+author: Tim Davies
+category: speaking
+link:
+project: Connected Conversations
+---
+
+Tim spoke as part of a panel for UNICEF UK's in-house Digital Week on the Risks and Benefits of AI.
+
+
+
+The discussion touched on three key areas. My tidied-up speaking notes in response to the prepared questions are below. These may vary a little from the remarks I delivered, but they give a sense of the territory covered.
+
+## Is AI the next evolution of Digital? If so, how should a children's rights charity respond?
+
+If you'll forgive me, I'm going to start by taking a bit of a personal look back at past evolutions of digital before getting to AI.
+
+As I was preparing for this talk, I was reflecting on some of my early encounters with both emerging technologies and, as it happens, the UN Convention on the Rights of the Child. As a 17-year-old I was a member of the then [Children and Young People's Unit Youth Advisory Board](https://web.archive.org/web/20020222143801/http://www.cypu.gov.uk/corporate/about/further-childrenyoung.cfm) to the then Minister for Children and Young People, John Denham.
+
+We were invited to be observers for the periodic review of the UK in front of the Committee on the Rights of the Child, and spent a week in Geneva both attending committee sessions and meeting groups across the city, including the ILO, UNICEF and others. I was armed with the pocket-sized digital camera that my youth service in Havant Borough had found budget for a few weeks before, capturing photos and 30-second videos (all the camera's memory could accommodate) and sharing them in proto-blog posts on the website Taking It Global - a platform for young people from across the world to share projects, actions and activities.
+
+In the early 2000s, the web had only been with us for a decade, and we were excited about the potential of this technology to connect us - with peers at home, and young people working on advocacy for children's rights across the globe.
+
+Fast forward a few years, and I was working at the National Youth Agency, developing a research programme on Youth Work and Social Networking in the era of MySpace, Bebo and early Facebook. We were exploring both how youth workers could make positive use of emerging social media, and how they should take account of the impact it was having on young people's lives: recognising that to meet young people where they are, we need to critically understand the information and media landscape they operate in.
+
+From there I turned to the field of open data: not least through a project called Plings, exploring how far standards and infrastructures for sharing data could streamline access to information on positive activities for young people. There we constantly navigated the tension between harvesting what snippets of information we could from leaflets and websites about when football clubs or scout groups were open, and going direct to the sources: providing tools and incentives for group leaders to share up-to-date and accurate information about their activities and clubs, and their capacity for new members to get involved.
+
+At that point my own journey moved off into work on open government rather than youth services - though in my current role at Connected by Data I draw heavily on a commitment to participatory practice rooted in my own experience of projects based on [Article 12 of the UNCRC](https://www.cypcs.org.uk/rights/uncrc/articles/article-12/). However, I start with these reflections on different waves of digital for a couple of reasons.
+
+The early web, social media, open data - and the opportunities and challenges they bring - are all still with us. AI is not so much an evolution that transcends them as a layer on top that interacts with them. And the versions of AI we have with us now are predominantly shaped not by the kind of bottom-up experimentation and distributed logic that drove the early web, but by centralised platforms and Silicon Valley capital: we must keep our eyes open to this. I'd argue that ChatGPT has not been disruptive because of its underlying technology, but because of its interface: a chat box that makes us more forgiving of the limitations of generative AI models - which produce plausible, but not factual, texts and images. The generative AI wave also has hallmarks of a bubble: [Daron Acemoglu's recent paper on the macroeconomics of AI](http://www.nber.org/papers/w32487) offers a compelling challenge to claims about the contribution it could make to growth and productivity - yet governments, corporations and consultancies are selling big claims of its impact.
+
+The question put to us was: is this the next evolution of digital, and how should children's rights organisations respond? Within AI are some evolutions of digital - and many that also embed ongoing evolutions of capitalism - but it's not the whole story.
+
+Regardless, organisations do need to respond to AI. As for the how, the watchwords for me - which we might come back to - are bounded experimentation, transparency and accountability, and a commitment to an inclusive and participatory approach.
+
+## What advantages are there for you personally and your work from the emergence of new applications of AI? How could we harness these?
+
+I mentioned earlier the idea of bounded experiments with AI - finding opportunities to test and benchmark what current tools can and can't bring. The main place I've been doing that in my own work is in preparing for, and writing up, participatory public engagement sessions.
+
+Last year we ran a kind of citizens' jury alongside the AI Safety Summit, which we called the [People's Panel on AI](https://connectedbydata.org/projects/2023-peoples-panel-on-ai), and this year we've been running an ongoing panel of members of the public to input into Responsible AI UK's Public Voices on AI project.
+
+Generative AI tools have been useful there in a couple of contexts. Firstly, the members of the public we worked with have asked a couple of times for notes or summaries of presentations or panels we've asked them to observe and engage with. In the past, if I'd not arranged for a note-taker in advance, this would have been a tricky request to meet. But we found we could go from a YouTube video of a recorded panel, to a transcript, to a reasonable aide-memoire summary, in about 15 minutes. I could have said less than 5 minutes - as that's how long the technology took - but we often had about 10 minutes of work tidying up errors in the transcription of names and details: recognising that whilst speech recognition rarely struggled to capture Johns or Junes, it struggled to consistently label Ahmed or Aditya - errors which, if not caught upstream, led to misattribution of points or, at worst, erasure of their contributions from the record.
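+
+For the technically curious, here's a minimal sketch of the video-to-transcript step of that workflow - assuming, purely for illustration, yt-dlp for pulling the audio and the open-source Whisper model for transcription (not necessarily the tools we used), with the tidy-up of names still done by hand:
+
+```python
+# Illustrative sketch only: yt-dlp and openai-whisper stand in here for
+# whatever transcription tooling is to hand (both need installing, plus ffmpeg);
+# the tidy-up pass stays manual.
+import subprocess
+
+import whisper  # pip install openai-whisper
+
+video_url = "https://www.youtube.com/watch?v=EXAMPLE"  # placeholder URL
+
+# 1. Pull the audio track from the recorded panel.
+subprocess.run(
+    ["yt-dlp", "-x", "--audio-format", "mp3", "-o", "panel.%(ext)s", video_url],
+    check=True,
+)
+
+# 2. Machine-transcribe it - the "less than 5 minutes" step.
+model = whisper.load_model("base")
+transcript = model.transcribe("panel.mp3")["text"]
+
+# 3. Save the raw transcript for a manual pass: in our experience this is
+#    where most of the time goes, correcting the names the model gets wrong
+#    before anything is summarised or shared.
+with open("panel-transcript.txt", "w") as f:
+    f.write(transcript)
+```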
+
+The second use we've been making of generative AI is to help us write up sessions - again recorded and machine-transcribed. We've run a couple of blind tests comparing a human-written summary to an AI summary - and found the AI summary lacks nuance - and that [we lose out on an important opportunity to engage with the text more slowly](https://biancawylie.medium.com/automating-summation-on-ai-and-holding-responsibility-in-relationships-642f9f79534c). However, after we've written our first pass pulling out themes and quotes from the transcripts, feeding those transcripts into ChatGPT and having an interactive conversation - to see if it suggests areas we've missed, or pulls out different verbatim quotes on given themes - can be a useful way of checking our biases.
+
+So - whilst lots of AI tools at the moment are built to offer you a 'first draft', I think harnessing them as part of developing the second draft is much more interesting and valuable.
+
+I've also been thinking about the AI tools we might not use, but that others could be drawing on. After experimenting with getting [NotebookLM](https://notebooklm.google/) to generate a talking-heads podcast from a recent report on global deliberation on AI (a copy here if you want to listen!), I've been reflecting on how anything we now write might be read via AI-driven summarisation. In the case of our Global Citizen Deliberation on AI report, it appears that, essentially by accident, it was structured in a way that led to a very effective podcast summary. But I've tried other reports which are not handled so well. Without twisting our content into some kind of AI-SEO practice, we may need to think about both how our content production shapes AI models overall, and how well it communicates when mediated by common AI models.
+
+## What are the greatest risks of AI and how should we, as an ethical organisation, take account of these in how we work internally and how we work with others?
+
+I want to focus on two areas in particular. Firstly, responsible data practices. Using an AI system often means combining a pre-trained model with data that your organisation has collected or holds. When that data involves children and young people, there are extra considerations to take into account. The [Responsible Data for Children](https://rd4c.org/principles/) principles and guidance are an excellent starting point for thinking about this. We need to recognise that AI tools are data processing tools, and call for transparent, accountable and participatory data practices.
+
+Secondly, and to build on the topic of AI bias, I think we need to address the potential biases in AI models when it comes to the representation of children and young people. There has been work [on ecolinguistic bias](https://theh4rmonyproject.org/articles/) in AI models, trained on historical content that does not represent the ecological future we need to create, and I wonder whether similar challenges apply when it comes to the alignment of AI models with children's rights, and with the lived experience of children and young people. Models trained on content from the open web are likely to underrepresent content from a child or young person's perspective. Being aware of these specific biases may be particularly important for a children's rights organisation.
+
+In the discussion session we also explored questions of governance, and how far charities should be 'early adopters' or whether they should play another role in the changing technology landscape.
+
+We've been thinking a lot about the different perspectives we can take on AI: from a workers' perspective, an environmental perspective, a bias perspective, an inclusion perspective and so on. Effective governance involves not leaving AI to be the domain of one board member or team, but accepting that everyone has a relevant perspective on the [AI elephant](https://en.wikipedia.org/wiki/Blind_men_and_an_elephant), and making sure everyone feels empowered to ask questions and bring their distinct perspectives to the table.
+
+When it comes to the role of charities in engaging with emerging digital tools, I think we need to focus in particular on the collaborative advantage. Individual organisations may have neither the capacity, mandate nor power to substantially shape technologies - but together, we can. So, for children's charities, it may be important to work with others to call for better AI models that are aligned with the lived experience of children and young people, or to work together on getting tools implemented in ways that embed responsible data for children practices. At Connected by Data, through the [Data and AI CSO Network](https://data-and-ai-cso-network.org/), we've been exploring how civil society groups can be stronger together.
+
+## Coda
+
+As this was an internal workshop, these notes reflect only the inputs I offered to the session. However, I want to note the valuable learning I gained from my fellow panellists, and my gratitude to the chair for curating a really interesting discussion.
\ No newline at end of file
diff --git a/_includes/footer.html b/_includes/footer.html
index 8a20e78..ec34cb1 100644
--- a/_includes/footer.html
+++ b/_includes/footer.html
@@ -92,7 +92,7 @@
-
-
+
diff --git a/_people/alan-hudson.md b/_people/alan-hudson.md
index de9becc..3b34946 100644
--- a/_people/alan-hudson.md
+++ b/_people/alan-hudson.md
@@ -1,7 +1,6 @@
---
layout: person
roles:
- - associate
title: Alan Hudson
role: Practice associate
picture: alan.jpeg
diff --git a/_people/libby-young.md b/_people/libby-young.md
index 19a31bf..2ee1006 100644
--- a/_people/libby-young.md
+++ b/_people/libby-young.md
@@ -1,7 +1,6 @@
---
layout: person
roles:
- - associate
- fellow
redirect_from: /associates/libby-young
title: Libby Young
diff --git a/_people/liz-steele.md b/_people/liz-steele.md
index 100578c..94a9bfd 100644
--- a/_people/liz-steele.md
+++ b/_people/liz-steele.md
@@ -1,7 +1,6 @@
---
layout: person
roles:
- - associate
redirect_from: /associates/liz-steele
title: Liz Steele
role: EU Policy & Partnerships associate
diff --git a/_posts/2024-11-14-emily-weeknotes.md b/_posts/2024-11-14-emily-weeknotes.md
new file mode 100644
index 0000000..b0eb89b
--- /dev/null
+++ b/_posts/2024-11-14-emily-weeknotes.md
@@ -0,0 +1,33 @@
+---
+layout: post
+author: Emily Macaulay
+category: weeknotes
+---
+### What I’ve been doing
+We’ve been finalising a few bits as a team recently. Our end-of-project grant report for an 18-month programme of work has been submitted, and two others are planned. We’ve had an unexpected interim report requested, which a colleague is getting finished, and we’ve had a couple of bids submitted too. We’ve also found ourselves following up on recent discussions and new relationships that may result in some contract work, so concept notes abound.
+
+In early December we’re holding a Design Lab, collaborating with DSIT, on public participation in and around the National Data Library, and we’re in early-stage planning for another Design Lab in early 2025.
+
+I did some reading / listening on the train and covered:
+* Jeni speaking on the [AI Interrogator podcast](https://www.infosys.com/iki/podcasts/ai-interrogator/community-driven-data-ai.html?soc=smo_ln_podcasts_ai-int-jeni_IKI_html5_05112024_cd19ce1ec7b348a06dd88e5d4443f135_lq_lq#audio-player-part-podbean) (get people involved in decisions about creating and using AI)
+* An [FT article on how AI isn’t working for d/Deaf people](https://www.ft.com/content/10489c19-aecc-43e4-947f-bfde497de7b9?accessToken=zwAGJRCmoeRgkc8QSJwZrsxD5NOUf7_eSX3nuQ.MEQCIHgBjkDEXsRq91e9yQrVFPXPAfTl6Hi9tdgcoh95kfHsAiA_jnQRt3se2v98fLEJoBNF1zGb4TYyqNMmShiyUuGypA&sharetype=gift&token=8ed28d97-0abc-4ef9-af89-67fbe9fb75e3) (reminder that diversity in decisions and understanding is vital, these kind of stories remind me of Tim Harford writing in his book ‘Messy’ about what thinking you miss without diversity)
+* The executive summary of [Ada Lovelace Institute’s recent report](https://www.adalovelaceinstitute.org/report/buying-ai-procurement/) on ‘Buying AI’ (an exploration of procurement structures in local authorities)
+* A [KCL blog by Dr Marcus Weldon](https://www.kcl.ac.uk/reflections-to-live-well-with-ai-technology-transparency-is-the-key) about what intelligence is, and what AI is
+* [Tim’s weeknotes from March 2023](https://connectedbydata.org/weeknotes/2023/03/17/tim-weeknotes) (from before I joined but long on my TBR list, covering ‘doodling data communities’ and other bits)
+
+And some bits on organisational development and operations:
+* A blog from Kuba Bartwicki ([shared on Bluesky](https://bsky.app/profile/kubabartwicki.bsky.social/post/3l3bdfgjj5s2s)) about why [working in the open is “good for you and your teams”](https://www.kubabartwicki.com/posts/working-in-the-open/) (I’m experiencing open working in this Connected by Data team in a way I hadn’t imagined people could work)
+* Catching up on the August edition of [HMRC’s Employer Bulletin](https://www.gov.uk/government/publications/employer-bulletin-august-2024/august-2024-issue-of-the-employer-bulletin)
+* A ‘work life’ blog by Kat Boogaard [about decision fatigue](https://www.atlassian.com/blog/productivity/decision-fatigue#:~:text=The%20basic%20idea%20here%20is,need%20to%20make%20another%20choice.)
+* Home Office [content guide on ‘Designing for individuals with limited English’](https://design.homeoffice.gov.uk/content-style-guide/designing-for-limited-english) (interesting and useful, with some notable suggestions to engage with intended audiences and, if using Google Translate, to do so consciously)
+
+### What I need to take care of
+I’m on two weeks off starting today so hopefully the answer to this is nothing … thinks … nope, I think I’ve done everything I could think of, got ahead of anything that can be done in advance and left a handover / reference document for the team in case queries come up.
+
+I’ve got a couple of busy days this weekend for my role as [Trustee](https://www.consortium.lgbt/meet-the-board/) of [Consortium](https://www.consortium.lgbt/). It’ll be lovely to be in person with the team and members.
+
+### What I’ve been inspired or challenged or moved by
+I’m getting married in just over a week. I’m getting very excited and squishy inside as we’re finalising details and it is all feeling very real.
+
+### What I’ve been reading
+I got a new library book stack ready for my couple of weeks’ leave. The first one I’ve picked to read is [“Runner’s High” by Jenni Falconer](https://www.awesomebooks.com/book/9781398720893/runners-high/used). It’s giving me motivation to run more again - which is ideal as the weather gets a bit nippy (I’m a bit of a fair-weather runner these days).
\ No newline at end of file
diff --git a/_posts/2024-11-15-tim-weeknotes.md b/_posts/2024-11-15-tim-weeknotes.md
new file mode 100644
index 0000000..ca2fd8f
--- /dev/null
+++ b/_posts/2024-11-15-tim-weeknotes.md
@@ -0,0 +1,52 @@
+---
+layout: post
+author: Tim Davies
+category: weeknotes
+projects:
+---
+
+A busy week, with a team meeting in London and getting the [Call for Papers for our Participatory AI Research Symposium](https://pairs25.notion.site) live as an unofficial fringe of the Paris AI Action Summit. In weeknotes this week: some reflections on being an effective oversight group member, and a blogged version of a [Bluesky thread](https://bsky.app/profile/timdavies.org.uk/post/3lavfqqcmts2o) looking back on my early experiences of participation in government, sparked by speaking at [a UNICEF UK Digital Week event on AI on Wednesday](https://connectedbydata.org/events/2024-11-13-unicef-uk-digital-week).
+
+## How can I be an effective oversight group member?
+
+This week I took part in the first oversight group meeting for a new [ScienceWise-supported](https://sciencewise.org.uk/projects-and-impacts/project-what-we-are-up-to/) public dialogue project on public attitudes towards police use of AI, commissioned by the National Policing Capabilities Unit (NPCU) within the [Home Office](https://www.gov.uk/government/organisations/home-office), and co-managed by the [Home Office's Policy & Innovation Lab (CoLab)](https://hodigital.blog.gov.uk/2020/08/13/what-is-colab-at-ddat-home-office/). It's really interesting to see a user-research and design team engaged in commissioning public dialogue - and a valuable opportunity to learn about how user research and public dialogue can interact: a theme that came up in [our evaluation of the Legal Education Foundation's Justice Data Matters dialogues](https://connectedbydata.org/resources/justice-data-matters-2022-evaluation-report).
+
+This is the third recent experience I've had as either an observer of a public dialogue process, or on an oversight group - in addition to our hands-on work at Connected by Data in recent years commissioning, facilitating and evaluating public engagement work, and thinking about governance of participatory processes. As a result, and building on recent conversations with a colleague who has sat with me on another recent oversight/advisory group, I've been reflecting on how I can best approach oversight group roles.
+
+**First sessions matter**. A first oversight group session often involves lots of scene setting and introduction: but it will often be timed just as the scope and shape of a project is being tied down (if it hasn't been already). If there are big framing questions to be asked about the project, or process points that need working out - they need raising right at the start.
+
+In the oversight group session I took part in this week, a chunk of time was given to consultation on the group's Terms of Reference. This was a useful opportunity to address questions like whether oversight group members would be invited to observe dialogue sessions, how long would be available for comment and feedback on documents, and whether members can request access to additional materials. Establishing the right balance of the group as advisor and critical friend feels like it can make the difference between a group that is able to meaningfully contribute to a process, and one that is just a formality.
+
+**Identify who is absent**. Oversight group membership is generally voluntary (i.e. unfunded), and recruited through the networks of the commissioning organisation. To make the most of the critical friend role of an oversight group, it's important to have a diversity of participants. After the first session I dropped a line to a fellow oversight group member (learning: debrief from calls with a colleague or another group member in order to think about follow-up), and we decided to suggest a few other potential groups who could be considered either for oversight group membership, or to input as experts into the dialogue process. Such suggestions may not always be taken up, but making them feels important - and acts as a reminder to me of the perspectives I should be looking for in draft dialogue materials or expert selection.
+
+**Create space for questions**. Most oversight groups I've been part of now take place on Teams/Zoom, often with 15+ participants, fairly heavy representation from commissioning organisations and delivery partners, and members who don't all already know each other. As in many scrutiny or oversight contexts, there's usually lots to present, and then relatively limited time for questions. In the past, I've held back on raising questions (as I generally have quite a few, but don't want to crowd discussion, and it never feels comfortable to be the first person with a hand up) - but I've recently been thinking about the kinds of early questions to put that can encourage and create space for others to also bring questions and comments forward. I've also been trying to use the chat channel to make sure I properly log questions that I think should be addressed, as this provides another way to surface areas of shared interest in the oversight group, even if not everything gets discussed on the call.
+
+**Follow up**. On every oversight group I've been part of, convenors have ended calls with an invitation to follow up with them on any comments or questions not addressed during the session. I'm realising I need to book in at least a half-hour after sessions to work on any follow-up, as sending notes afterwards to convenors appears to be as useful as speaking in the sessions.
+
+There's nothing ground-breaking in these reflections - but useful to write them down as a reminder-to-self if nothing else.
+
+#### Write-up as research and practice
+
+One of the useful side-effects of writing up a project for weeknotes is that looking for contextual links to include (for example trying, in this case unsuccessfully, to find a good URL to point you to if you want to understand what the National Policing Capabilities Unit (NPCU) is) can be a good trigger for additional research: both to (a) find out how transparent or legible the bits of the government system I'm engaging with are in practice - and to understand them better; and (b) to discover additional helpful context, such as news of this prior NPCU-commissioned [TechUK research into Threats and Opportunities of AI for Policing](https://www.techuk.org/resource/opportunity-home-office-national-policing-capabilities-unit-threats-and-opportunities-of-ai-for-policing.html) (unpublished?), and the existing [Covenant for Using Artificial Intelligence (AI) in Policing](https://science.police.uk/site/assets/files/4682/ai_principles_1_1_1.pdf).
+
+Another good reminder (to self) that write-ups really matter.
+
+## Looking back to look forward
+
+Back in the early 2000s, as a teenager, I was a member of the Youth Advisory Forum to the cross-departmental [Children and Young People's Unit](https://web.archive.org/web/20020222143801/http:/www.cypu.gov.uk/corporate/about/further-childrenyoung.cfm) - and yesterday I had cause to look back on some notes from it.
+
+Musing on the concept of 'Participation as a service' in relation to DSIT's Digital Centre of Government plans, I came across this 2003 interview [in Children and Young People Now](https://www.cypnow.co.uk/content/other/big-interview-youth-participation-society-althea-efunshile-director-children-and-young-peoples-unit) with Althea Efunshile, CYPU director, and was struck by a few things:
+
+(1) Participation as a force for inter-departmental alignment. Getting departments to align around citizen-centric reform is difficult. For CYPU, facilitating direct contact between young people, departments and Ministers was a strategic move to keep the focus on young people's lives, rather than on individual services.
+
+(2) The focus on direct dialogue with Ministers. Perhaps there is something exceptional about children and young people's participation: under-18s are not (currently) voters, and so lack the political channel the enfranchised in theory have. Yet voting is an increasingly weak signal with which to direct policy.
+
+Is there then a case to re-emphasise the role of the civil service in creating inclusive spaces for dialogue between political leaders and publics: helping shape both the direction and detail of policy at the political level, and not just user-centred implementation detail?
+
+(3) Layers of engagement. The 25 or so of us under-18s on the standing advisory forum were just one part of CYPU's public engagement. We were sometimes consulted on the design or analysis of other engagement activities, but the participatory culture in the unit involved many overlapping activities.
+
+(4) Partnership with civil society. I was nominated to the CYPU advisory panel by The Children's Society, and TCS continued to support my engagement there. Other members had links to, and were supported by, other civil society projects and groups.
+
+When recruitment to many participation processes today is outsourced to firms specialising in market research and/or public dialogue, it's useful to think about the cases where, with the right relationships, civil society should still be a key broker and enabler of diverse public engagement.
+
+In sum: the CYPU's participatory practice was not perfect. Nor are the elements above entirely absent from current thinking on public participation and dialogue with government. However, I'm left reflecting on how much was lost in years of austerity, and on how certain forms of professionalised participation and the deliberative wave have perhaps taken us in different directions. There is, I suspect, always some value in looking back in order to look forward.
\ No newline at end of file