Other packages have had a major version bump in addition to Ash core. While all packages have been changed to refer to `domain` instead of `api`, they did not receive a major version bump because there were no special breaking changes to account for when using that package. You will also need to factor in the following upgrade guides, if you use those packages.
This section contains each breaking change, and the steps required to address it in your application.
If you use `Ash.Flow`, include `{:ash_flow, "~> 0.1.0"}` in your application.
In 2.0, Ash had a dependency on `picosat_elixir`. In 3.0, this is an optional dependency, to help folks handle certain compatibility issues. To upgrade, add `{:picosat_elixir, "~> 0.2"}` to your `mix.exs`.
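A minimal sketch of the corresponding `mix.exs` entry (the `ash` version shown alongside it is illustrative):

```elixir
# in mix.exs
defp deps do
  [
    {:ash, "~> 3.0"},
    # picosat_elixir is now an optional dependency of ash, so add it explicitly
    {:picosat_elixir, "~> 0.2"}
  ]
end
```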
The previous name was often confusing, as this is an overloaded term for many. To that end, `Ash.Api` has been renamed to `Ash.Domain`, which better fits our usage and concepts.
To make this change you will need to do the following:

- Replace `Ash.Api` with `Ash.Domain` in your application.
- Replace places where an `:api` option is passed to a function with the `:domain` option. For example, `AshPhoenix.Form.for_create(..., api: MyApp.SomeApi)` should now be `AshPhoenix.Form.for_create(..., domain: MyApp.SomeDomain)`.
- Update your application config to define `ash_domains` instead of `ash_apis`, e.g. `config :my_app, ash_domains: [MyApp.MyDomain]`.
- `code_interface.define_for` is now `code_interface.domain`. Additionally, it is set automatically if the `domain` option is specified on `use Ash.Resource` (see the sketch after this list).
- `domain.execution.timeout` used to default to 30 seconds, but now it defaults to `:infinity`. This is because a timeout requires copying memory across process boundaries, and is an unnecessary expense the vast majority of the time. We recommend putting timeouts on specific actions that may need them.
- `actions.create.reject`, `actions.update.reject` and `actions.destroy.reject` have been removed. Blacklisting inputs makes it too easy to make mistakes. Instead, specify an explicit `accept` list.
- `relationships.belongs_to.attribute_writable?` no longer makes the underlying attribute both public and writable. It defaults to the value of `writable?` on the relationship (which itself defaults to `true`), and only controls the generated attribute's `writable? true` property. So now, by default, it will be `true`, which is safe when coupled with the changes to `default_accept` discussed below. Generally, this means you should be safe to remove any occurrences of `attribute_writable? true`.
- `relationships.belongs_to.attribute_public?` has been added, which controls the underlying attribute's `public?` value. This, similar to `attribute_writable?`, defaults to the `public?` value of the relationship.
- `resource.simple_notifiers` has been removed, in favor of specifying non-DSL notifiers in the `simple_notifiers` option to `use Ash.Resource`.
- `resource.actions.read.filter` can now be specified multiple times. Multiple filters will be combined with `and`.
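As a sketch of the `code_interface` change above (the resource, api/domain and `get_by_id` definition are hypothetical):

```elixir
# 2.0
code_interface do
  define_for MyApp.MyApi
  define :get_by_id, action: :read, get_by: [:id]
end

# 3.0 — the domain comes from `use Ash.Resource, domain: MyApp.MyDomain`
code_interface do
  define :get_by_id, action: :read, get_by: [:id]
end
```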
`Ash.Registry` is no longer needed. Place each resource in the domain instead:

```elixir
resources do
  resource Resource1
  resource Resource2
end
```
When calling a calculation with arguments, this is done by passing a keyword list to the calculation, for example: `full_name(separator: "")`. In 2.0, keyword lists were not evaluated as part of the expression in the same way as other values, meaning two things:

- You did not have to pin usage of template functions, i.e. `full_name(separator: arg(:separator))`. Now, you will need to do so: `full_name(separator: ^arg(:separator))`.
- You had to use `expr` to pass an expression to a calculation argument (this only works if `allow_expr? true` is configured on the calculation argument). For example: `full_name(separator: expr(sep_1 <> sep_2))` would now be `full_name(separator: sep_1 <> sep_2)`.

If you do not have any expression calculations that accept arguments, you likely need to do nothing. To make these changes, you will need to look at each place you build an expression that may be calling a calculation with arguments, i.e. `Ash.Query.filter` and the `expression` callback in `Ash.Calculation`, and see if they must be modified as described above.
`Ash.Policy.FilterCheck` and `Ash.Policy.FilterCheckWithContext` have been combined into `Ash.Policy.FilterCheck`. If you have any usages of `FilterCheckWithContext`, you'll need to change that to `FilterCheck`. If you have usages of `FilterCheck`, you will need to add the context arguments to the callbacks. Compiler warnings will show you what callbacks mismatch.
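For instance, a minimal sketch of an updated filter check (module, field and logic are hypothetical, not from this guide):

```elixir
defmodule MyApp.Checks.OwnedByActor do
  use Ash.Policy.FilterCheck

  # the import may already be provided by `use`; included here for clarity
  import Ash.Expr

  # 2.0's Ash.Policy.FilterCheck callback took only `opts`; the actor and
  # context are now passed explicitly, as they were for FilterCheckWithContext
  def filter(_actor, _context, _opts) do
    expr(owner_id == ^actor(:id))
  end
end
```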
`Ash.Filter.parse/5` is now `Ash.Filter.parse/3`. `Ash.Filter.parse_input/5` is now `Ash.Filter.parse_input/2`. The third and fourth optional arguments are unnecessary and were previously ignored, and the fifth argument is not necessary for `parse_input`.
`Ash.Filter.used_aggregates/3` no longer accepts `:all` as a relationship path, instead using `:*`. It's very unlikely that this is used in your application.
Tools for templating expressions were previously in `Ash.Filter.TemplateHelpers`. This often led to confusion, because it was a hard-to-remember module name and didn't really make sense to be separate from the rest of our utilities. Now, all the functions/macros you need for expressions are in `Ash.Expr`. This means that in any given file where you want to work with expressions, you only need `import Ash.Expr`. Additionally, `import Ash.Expr` has been added to changes, preparations, validations and calculations automatically.
`Ash.CiString.new(nil)` now returns `nil` instead of `%Ash.CiString{value: nil}`.
For validations, `validate/2` is now `validate/3`, with the third argument being the context of the validation.
The function signature of `Ash.Query.Calculation.new` has been changed. We use an options list over optional arguments, and now require constraints to be provided. You will need to adjust your calls to this function.
`Ash.Calculation` has been renamed to `Ash.Resource.Calculation`. You will need to rename your references to it.
`Ash.Query.to_query` has been removed. Use `Ash.Query.new` instead.

`Ash.Query.expr` has been removed. Use `Ash.Expr.expr` instead.
`first` and `list` aggregates have a new option called `include_nil?`, which defaults to `false`. You may need to add `include_nil?: true` to your resource aggregates if you wish to retain the old behavior.
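For example, a sketch of opting back into the old behavior on a single aggregate (the aggregate, relationship and field names are hypothetical):

```elixir
aggregates do
  # keep the 2.0 behavior of including records where `title` is nil
  first :latest_comment_title, :comments, :title do
    include_nil? true
  end
end
```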
The format for sorting on calculations that take input has been swapped. Previously, you would use `sort(calculation: {:desc, %{arg: :value}})`, but for the sake of consistency, you now use `sort(calculation: {%{arg: :value}, :desc})`.
`Ash.Changeset.new/2` has been removed. `Ash.Changeset.new/1` is still available for creating a new changeset, but attributes and arguments should, with few exceptions, be passed to the relevant `Ash.Changeset.for_<action_type>` functions, not to `Ash.Changeset.new/2`. Removing the second argument helps clarify the purpose of `Ash.Changeset.new/1`.
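A sketch of the change, assuming a hypothetical `Post` resource with a `:create` action:

```elixir
# 2.0
Post
|> Ash.Changeset.new(%{title: "foo"})
|> Ash.Changeset.for_create(:create)

# 3.0 — pass the inputs to `for_create` directly
Post
|> Ash.Changeset.for_create(:create, %{title: "foo"})
```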
`Ash.Changeset.after_transaction/2` can no longer be called from within other lifecycle hooks. We need to know whether or not a changeset has after-transaction hooks before we start processing any hooks.
`Ash.Changeset.manage_relationship/4` no longer uses `:all` to signal that all changes will be sent to the join relationship. Instead, use `:*`.
`Ash.Changeset.filter` now accepts expressions. The value of the filter is no longer a simple equality map, but rather a regular Ash expression. We add to it on successive calls to `Ash.Changeset.filter`. Additionally, this value is stored in `changeset.filter` instead of `changeset.filters`.
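A sketch of an expression-based changeset filter (the `version` attribute and the optimistic-lock style usage are hypothetical):

```elixir
import Ash.Expr

# only apply the update if the row still has the version we originally read
changeset
|> Ash.Changeset.filter(expr(version == ^changeset.data.version))
```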
`Ash.Policy.FilterCheck` and `Ash.Policy.FilterCheckWithContext` have been combined. The name is `Ash.Policy.FilterCheck`, but the callbacks take the extra arguments present in `Ash.Policy.FilterCheckWithContext`.
The functions provided to `after_action/1`, `after_transaction/1`, `before_transaction/1` and `before_action/1` must all now take an additional argument, which is the change context.

For example,

```elixir
change after_action(fn changeset, result -> ... end)
```

is now

```elixir
change after_action(fn changeset, result, context -> ... end)
```

This is true for both preparations and changes.
Previously, in expressions, you could say `expr(ref(^some_atom))`. This is a tool for building dynamic references, but it was an exception to the standard pattern of prefixing "external" things in an expression, i.e. `arg`, with `^`. Now, you must do the same with `ref/1` and `ref/2`. You will need to search for `ref(` in your application, and ensure that if it is inside of an expression you have prefixed it with `^`. The original example becomes: `expr(^ref(some_atom))`.
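A sketch of a dynamic reference in a query filter (the resource and field are hypothetical):

```elixir
require Ash.Query
import Ash.Expr

field = :title

# 2.0: Ash.Query.filter(MyApp.Post, ref(^field) == "hello")
# 3.0: the ref itself must be pinned
Ash.Query.filter(MyApp.Post, ^ref(field) == "hello")
```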
Usage of `def_ash_error/2` will show you what to change in its warnings. Instead of combining `def_ash_error` with `defimpl Ash.ErrorKind`, you create a custom error like so:

```elixir
defmodule MyCustomError do
  use Splode.Error, class: :invalid, fields: [:foo, :bar]

  def message(error) do
    "Message: #{error.foo} - #{error.bar}"
  end
end
```
When sorting or filtering, if a field is not found, an `Ash.Error.Query.NoSuchField` is used, where it would have previously been an `Ash.Error.Query.NoSuchAttribute`. This was wrong, as sometimes the field reference was not an attribute. Places that would previously return `Ash.Error.Query.NoSuchAttributeOrRelationship` now return `Ash.Error.Query.NoSuchField` as well.

Additionally, the following exceptions have had keys remapped:

- `NoSuchAttribute`: `name` -> `attribute`
- `NoSuchRelationship`: `name` -> `relationship`
- `NoSuchFunction`: `name` -> `function`
- `NoSuchOperator`: `name` -> `operator`
In 2.0, a set of features allowed storing the actor, tenant and context in the process dictionary. There were fundamental issues with this pattern that manifested in subtle bugs. We suggest making this change before you upgrade, as this change can be made and verified without upgrading to 3.0.

You need to manually thread through your tenant, actor, and context values wherever you were using `Ash.set_*`. For example:

```elixir
Ash.set_actor(current_user)
Ash.set_tenant(current_tenant)

Ash.Changeset.for_create!(..)
Ash.Query.for_read(..)
```

would become

```elixir
Ash.Changeset.for_create!(.., tenant: current_tenant, actor: current_user)
Ash.Query.for_read(.., tenant: current_tenant, actor: current_user)
```
In order to honor rules on the `Domain` module about authorization and timeouts, we have to know the domain when building the changeset.

The domain for calls to embedded resources is determined from the parent changeset, so you don't need to change them at all. A `domain` constraint has been added in case you wish to make a given embedded resource always use a specific domain.
For example:

```elixir
attribute :bio, MyApp.Bio do
  constraints domain: MyApp.SomeDomain
end
```
While it is possible for resources to be used with multiple domains, it almost never happens in practice. Any resources that are used from only a single domain (not including embedded resources) should be modified to have a `domain` option specified in their call to `use Ash.Resource`. For example:

```elixir
use Ash.Resource,
  domain: MyApp.MyDomain
```
Calling functions on the domain has been deprecated. You must now use the functions defined in the `Ash` module to interact with your resources. They are the same as what was previously available in your domain module. For example:

```elixir
MyDomain1.create!(changeset)
MyDomain2.read!(query)
MyDomain3.calculate!(...)
```

can now be written as

```elixir
Ash.create!(changeset)
Ash.read!(query)
Ash.calculate!(query)
```

This makes refactoring resources easier, as you no longer need to change the call site; it remains the same regardless of which domain a resource is in.
For resources that are used with multiple domains, you will need to include the `domain` option when you construct a changeset.

For example:

```elixir
MyResource
|> Ash.Changeset.for_create(:create, input, domain: MyApp.MyDomain)
```
For more context, see the original discussion: ash-project#512
In 2.0, all public, writable attributes were accepted by each action by default. This made it very easy to accidentally expose writing to an attribute in an action where that was not the intent. Additionally, new attributes added were automatically writable across a wide array of actions, which was error prone for the same reason.
In 2.0, as well as 3.0, there is an option called `default_accept`, which modifies all actions that do not have an `accept` list. In 2.0, the default value for `default_accept` was "all public, writable attributes". In 3.0, the default value for `default_accept` is `[]`. This encourages a pattern of explicitly listing inputs to actions, and is safer and less error prone.
For those who want to keep the old behavior, you can use the new `:*` option to `default_accept` (also usable in an action's `accept` option) to accept all public attributes. Go to each resource and, inside the actions block, add:

```elixir
actions do
  default_accept :*
  ...
end
```

Then mark the attributes and relationships you want to accept as `public?: true` (see the section on `public?` below for more information on this change).
For those who want to be more explicit, or after your upgrade is complete if you wish to refactor existing resources and actions, the general best path forward is to copy the `default_accept` into each action (or put it in a module attribute and reference it) as the `accept` option. This way, when a new action is added, it does not "inherit" some list of accepted attributes.
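A sketch of that more explicit approach (the attribute and action names are hypothetical):

```elixir
@accepted_inputs [:title, :body]

actions do
  create :create do
    accept @accepted_inputs
  end

  update :update do
    accept @accepted_inputs
  end
end
```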
In 2.0, accepting a private attribute as a change required adding an argument with the same name and using `change set_attribute(...)`. Now that we require explicit accept lists, you can place private attributes in that list, which will allow them to be written to (but not read back).
The change to explicit accepts also included a change that defaults `belongs_to` attributes to `writable?: true` and `public?: false`. You may want to add `attribute_writable?: false` to your `belongs_to` relationships if you are adding `default_accept :*` and don't currently have `attribute_writable?: true` on them.
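For example, a sketch (the relationship and destination resource are hypothetical):

```elixir
# keep the generated belongs_to attribute out of the inputs accepted by `:*`
belongs_to :owner, MyApp.User do
  attribute_writable? false
end
```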
Default `:read` actions are now paginatable. For example, if you have `:read` in your default actions list:

```elixir
defaults [:read, :destroy, create: :*, update: :*]
```

In 2.0, it would generate an action like this:

```elixir
read :read do
  primary? true
end
```

Now, it generates an action like this:

```elixir
read :read do
  primary? true
  pagination [keyset?: true, offset?: true, countable: true, required?: false]
end
```
For most cases, this won't affect you. However, if you are using `AshGraphql` and have any queries connected to a default `:read` action, it will default to making those queries paginatable with keyset pagination. To keep the old behavior, you will need to add `paginate_with nil` to the query, for example:

```elixir
graphql do
  queries do
    list :list_things, :read, paginate_with: nil
  end
end
```
In Ash 2.0, `before_action` and `before_transaction` hooks that were added to a changeset were prepended to the list of hooks by default. These hooks were then run in order. What this meant is that, given an action like the following:

```elixir
create :foo do
  change before_action(fn changeset, _context ->
           IO.puts("first")
           changeset
         end)

  change before_action(fn changeset, _context ->
           IO.puts("second")
           changeset
         end)
end
```

You would see `second` printed before `first`.
In many cases, this won't matter to you. However, if you have a situation where the order of your before action/transaction hooks matters, you can do one of two things:
- reorder the changes that add those before action/transaction hooks
- use the `:prepend` option to `Ash.Changeset.before_action/2` and `Ash.Changeset.before_transaction/2` to explicitly prepend the hook to the list of hooks
To help make it clear what keys are available in the context provided to callbacks on these modules, they have been adjusted to provide a struct instead of a map. This helps avoid potential ambiguity, and acts as documentation.

If you are using something like `Keyword.new(context)` to generate options to pass into an action, change that to `Ash.Context.to_opts(context)`.
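A sketch of passing a change's context through to a nested call (the module, resource and attribute names are hypothetical):

```elixir
defmodule MyApp.Changes.CountExistingPosts do
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, context) do
    # Ash.Context.to_opts/1 carries the actor, tenant, etc. through to the nested read
    posts = Ash.read!(MyApp.Post, Ash.Context.to_opts(context))
    Ash.Changeset.change_attribute(changeset, :existing_post_count, length(posts))
  end
end
```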
Per the above change, we have specified the values available in the context of a calculation with `Ash.Resource.Calculation.Context`. In Ash 2.0, context was merged with arguments, which was problematic in various ways. Now, arguments are in `context.arguments`.

You will need to update your module-backed calculations to account for this.

```elixir
def calculate(records, _opts, context) do
  Enum.map(records, fn record ->
    record.first_name <> context.delimiter <> record.last_name
  end)
end
```

would need to be adjusted to access arguments in the context:

```elixir
def calculate(records, _opts, %{arguments: arguments}) do
  Enum.map(records, fn record ->
    record.first_name <> arguments.delimiter <> record.last_name
  end)
end
```
There is no longer a `private?` option for attributes, relationships, calculations and aggregates. Instead of attributes defaulting to `private?: false`, they now default to `public?: false`. It was too easy to add an attribute and not realize that you had exposed it over your API.

If you are using API extensions (i.e. `AshGraphql` and `AshJsonApi`), you will need to go to your resources and "invert" the definitions, i.e. remove `private?: true` and add `public?: true` to every other attribute, relationship and calculation. Don't forget the relationships and calculations!

The above includes embedded resources as well! Don't forget to make sure that all fields on your embedded resources are also marked as `public?: true` (if applicable). The goal here is to have a clear visual indicator of what in your application can be shown publicly.
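A sketch of inverting an attribute's visibility (the attribute names are hypothetical):

```elixir
# 2.0 — attributes were public unless marked private
attribute :email, :string
attribute :hashed_password, :string, private?: true

# 3.0 — attributes are private unless marked public
attribute :email, :string, public?: true
attribute :hashed_password, :string
```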
Previously, anonymous function calculations were special-cased to operate on a single record. For consistency, these anonymous functions now take the list of records.

Update any anonymous function calculations to take and return a list, for example:

```elixir
calculate :full_name, :string, fn record, _context ->
  record.first_name <> " " <> record.last_name
end
```

would become

```elixir
calculate :full_name, :string, fn records, _context ->
  # note, you can also return `{:ok, list}` or `{:error, error}`
  Enum.map(records, fn record ->
    record.first_name <> " " <> record.last_name
  end)
end
```
In 2.0, relationship loads from the `load/3` callback in a calculation would select all fields of that relationship and make them available to the calculation.

For example, the following calculation `load/3` callback expresses a dependency on all fields of the relationship `:relationship`:

```elixir
def load(_, _, _) do
  [:relationship]
end
```

In 3.0, relationship dependencies alone will only make the related primary keys available. You now need to select the explicit fields that you want to use in your calculation, for example:

```elixir
def load(_, _, _) do
  [relationship: [:field1, :field2]]
end
```

Each calculation can still opt into the old behavior by adding the `strict_loads?/0` callback and returning `false`:

```elixir
def load(_, _, _) do
  [:relationship]
end

def strict_loads?, do: false
```
In 2.0, calculations had a `select/3` callback, but `load/3` is now a superset of `select/3`, so `select/3` is no longer needed.

If you have a `select/3` callback in your calculations, you will need to remove it. You must then add those fields to the `load/3` callback.

For example:

```elixir
def select(_, _, _), do: [:some_attribute]

def load(_, _, _), do: [:some_calculation, some_relationship: [:some_field1, :some_field2]]
```

can now be written more simply as:

```elixir
def load(_, _, _), do: [:some_attribute, :some_calculation, some_relationship: [:some_field1, :some_field2]]
```
In 2.0, a private primary key called `autogenerated_id` was added to embedded resources if no primary key was added manually; this is no longer done in 3.0.

This should have no real effect on your application, except for the fact that your embedded attributes may still have `autogenerated_id` in the database, which won't be reflected by an attribute any more. If such an embed is updated, the `autogenerated_id` field will go away.
This is listed as a breaking change in case someone is depending on this feature, but that should be very uncommon/unlikely.
Previously, the Ash notifier would publish a message containing both the old and new values for changing attributes. Typically, we use things like IDs (which do not change) in notification topics, so for most this will not have an impact.

If you wish to send a notification for the old value and the new value, then an action cannot be done atomically. Bulk actions must update each record in turn, and atomic updates can't be leveraged.

If you're comfortable with the performance implications, you can restore the previous behavior by adding `previous_values?: true` to your publications in your pub_sub notifier:

```elixir
publish :update, ["user:updated", :email], previous_values?: true
```
In your notifiers and policy checks, when you get a changeset you currently have access to the `data` field, which is the original record prior to being updated or destroyed. However, this is not compatible with atomic/bulk updates/destroys, where we may be given a query and told to destroy it. In those cases, `changeset.data` will be `%Ash.Changeset.OriginalDataNotAvailable{}`. When you write a custom check or a custom notifier, if you need access to the original data, you must add the following function:

```elixir
# in custom checks
def requires_original_data?(_authorizer, _opts), do: true

# in notifiers
def requires_original_data?(_resource, _action), do: true
```
Keep in mind, this will prevent the usage of these checks/notifiers with atomic actions.
Previously, the domain's `authorize` option defaulted to `:when_requested`. This meant that, unless you said `actor: some_actor` or `authorize?: true`, authorization was skipped. This has the obvious drawback of making it easy to bypass authorization unintentionally. In 3.0, this now defaults to `:by_default`.

To avoid making a significant refactor, and to keep your current behavior, you can go to your domain and set the configuration below; otherwise, skip to the refactor steps below. We advise that you take this route to start, but we highly suggest that you change your domains to `authorize :by_default` in the future. `authorize :when_requested` will not be deprecated, so there is no time constraint.
```elixir
authorization do
  authorize :when_requested
end
```
For each domain that has the old configuration, after setting it to the new config, you'll need to revisit each call to that domain that doesn't set an actor or the `authorize?` option, and add `authorize?: false`.
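For example, a call that previously relied on authorization being skipped might become (a sketch; the resource is hypothetical):

```elixir
# previously: MyDomain.read!(MyApp.Post)
Ash.read!(MyApp.Post, authorize?: false)
```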
This may be a good time to do the refactor from `YourDomain.func` to `Ash.func`, if you want to. See the section about domains being required when building changesets.
`:update` actions and `:destroy` actions now default to `require_atomic? true`. This means that the following things will cause errors when attempting to run the action:

- changes or validations exist that do not have the `atomic` callback. This includes anonymous function changes/validations.
- attributes are being changed that do not support atomic updates. This most notably includes (for now) embedded resources.
- the action has a manual implementation
- the action has applicable notifiers that require the original data.
Updates and destroys that can be made fully atomic are always safe to do concurrently, and as such we now require that actions meet this criteria, or that it is explicitly stated that they do not have to. See the update actions guide for more.
You can set the following configuration, which will be removed in Ash 3.1. This configuration will retain the 2.0 default behavior of `require_atomic?` defaulting to `false`. You can then safely do the rest of the upgrade. Then, you can perform this one change after confirming that your system works as expected.

```elixir
config :ash, :require_atomic_by_default?, false
```
The vast majority of cases will be caught by warnings emitted at compile time.
Anonymous function changes can never be made atomic, because we don't know what they contain. You will either need to convert them to a module change and then follow the next section, or set `require_atomic? false`.
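A sketch of moving an anonymous function change into a module change (the module, attribute and logic are hypothetical); you can then add an `atomic/3` callback as described below:

```elixir
defmodule MyApp.Changes.Increment do
  use Ash.Resource.Change

  @impl true
  def change(changeset, _opts, _context) do
    # same body the anonymous function had; still not atomic until atomic/3 is added
    current = Ash.Changeset.get_attribute(changeset, :value) || 0
    Ash.Changeset.change_attribute(changeset, :value, current + 1)
  end
end
```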
If you have a module change, you can make it atomic by defining the `atomic/3` callback. This callback can replace the `change/3` callback, but it is very important to keep in mind that later changes will no longer have access to the new value. For example, if you have:

```elixir
def change(changeset, _, _) do
  # this is not concurrency safe
  Ash.Changeset.change_attribute(changeset, :value, changeset.data.value + 1)
end
```

If you have a subsequent change that does something like `Ash.Changeset.get_attribute(changeset, :value)`, it will get the new value (i.e. old value + 1). With atomics, `Ash.Changeset.get_attribute(changeset, :value)` would return the old value. This is because atomics schedule an update that happens when we call the data layer. For example:

```elixir
def atomic(changeset, _, _) do
  {:atomic, %{value: expr(value + 1)}}
end
```
This should not typically matter unless you have complex actions with multiple changes where subsequent changes need to know the results of previous steps. In those cases, if you can't make them all atomic, then it's best just not to worry about it and set `require_atomic? false`.
If you are using `change atomic_update/2`, `Ash.Changeset.atomic_update/2` or `Ash.Changeset.atomic_update/3`, and the type does not support atomic updates, you will get an error unless you do one of the following:

- for `change atomic_update/2`, add the `cast_atomic?: false` option.
- for `Ash.Changeset.atomic_update`, pass the value as `{:atomic, expr}`, i.e. `Ash.Changeset.atomic_update(changeset, :value, {:atomic, expr(value + 1)})`

For builtin types, the above applies to `:union`, `:map`, `:keyword`, and embedded types. It also applies to `:string`, but only if the `match` constraint is present.
In 2.0, inputs to actions that don't match an accepted attribute or argument were silently ignored. This made it very easy to make certain kinds of mistakes, like assuming that an input is being used by an action when it actually is not. Now, unknown action inputs will cause an `Ash.Error.Invalid.NoSuchInput`.

If you have action calls that are erroneously passing in extra values, you will need to remove them.
A logic error was fixed in this behavior for embedded resources. If you are using embedded resources in `{:array, _}` types, and are relying on including the primary key of that embedded resource to match records up for update/destroy behavior, you will need to make sure that you do one of the following:

- add the `writable?: true` flag to the `uuid` primary key of the embedded resource (probably what you want)
- modify the actions to accept an `id` argument and set the argument to the provided value
In 2.0, attributes that were not selected were replaced with `nil` values. This could lead to confusion when dealing with records that didn't have all attributes selected. If you passed these records to a function, it might see that an attribute is `nil` when actually it just wasn't selected. To find out if it was selected, you could look into `record.__metadata__.selected`, but you'd have to know to do that. To alleviate these issues, attributes that are not selected are now filled in with `%Ash.NotLoaded{}`, just like calculations and aggregates.
If you have logic that was looking at attribute values that may not be selected, you may have been accidentally working with non selected values. For example:
```elixir
if record.attribute do
  handle_present_attribute(...)
else
  # unselected attributes would have ended up in this branch
  handle_not_present_attribute(...)
end
```
Now, if it is possible for that attribute to have not been selected, you'll want to do something like this instead:
```elixir
case record.attribute do
  %Ash.NotLoaded{} ->
    handle_not_selected(...)

  nil ->
    handle_not_present_attribute(...)

  value ->
    handle_present_attribute(...)
end
```
When loading data in 2.0, the option `reselect_all?` defaulted to `false`. What this meant is that existing values for attributes would be reused, instead of visiting the data layer, by default. This can be an extremely valuable piece of behavior, but defaulting to it often means accidentally using data as a cache that you did not intend to use as a cache. Take the following example:

```elixir
user = %User{first_name: "fred", last_name: "weasley"}

Ash.update!(user, first_name: "george")

user |> Ash.load!(:full_name)
# in 2.0 -> fred weasley
# in 3.0 -> george weasley
```

To opt into the old behavior, which we recommend doing on a case-by-case basis, you can pass `reuse_values?: true`. For example:

```elixir
user |> Ash.load!(:full_name, reuse_values?: true)
```
In 2.0, it was possible to pass an Ash resource in all places where some instance of `Ash.Type` was supported. In 3.0, resources (except for embedded resources) don't implement the `Ash.Type` behaviour anymore.

If you were using a resource in one of the places that accept an `Ash.Type` (arguments, calculation return values or fields of a union), you have to refactor your code to use the `:struct` type together with an `instance_of` constraint:

```elixir
calculate :random_post, :struct, Calculations.RandomPost do
  constraints instance_of: Post
end
```