Caching Proxy: Getting Started
NOTE: It is strongly recommended that you read the Introduction and terminology page before reading this page.
In Microdot, to communicate with another service you request its interface as a dependency-injected constructor argument. This lets Microdot decide which implementation to provide for that interface. Usually you'll get an instance of the Service Proxy, which translates calls to the interface into RPC calls to the remote service. But if the remote service has any cached methods, Microdot provides the Service Proxy wrapped in a Caching Proxy, so you get the benefits of memoization without significant changes to your code. In other words, you don't have to change how you obtain an instance of a service interface to start using the Caching Proxy; if the interface has methods marked with [Cached], you're already using it.
Any methods in your interface that satisfy all of the following conditions will have caching enabled:
- The method must have an asynchronous result, meaning it must return a `Task<T>`. If it returns the non-generic `Task` or any other type, it will not be memoized.
- Caching must not be disabled for the interface where the method is defined. It is possible to disable caching via configuration.
All methods that use caching have subtle changes to the way the data source is accessed, called call grouping, which may affect how you observe outgoing calls to services, how exceptions are handled, the impact of delayed responses, and more. The details of call grouping are explained in the appropriate section below.
If the call throws an exception, it won't be cached.
Example:

```csharp
public interface ICustomerService
{
    // The result of this method will be cached (unless it threw an exception)
    [Cached]
    Task<Customer> GetCustomer(long id);
}
```
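For contrast, here is a sketch of an interface with methods that would not be cached under the conditions above. The method names are illustrative, not part of the library:

```csharp
public interface ICustomerService
{
    // Cached: marked with [Cached] and returns Task<T>.
    [Cached]
    Task<Customer> GetCustomer(long id);

    // Not cached: returns the non-generic Task, so there is no result to memoize.
    [Cached]
    Task RefreshCustomer(long id);

    // Not cached: synchronous return type, so it cannot be memoized.
    [Cached]
    Customer GetCustomerSync(long id);

    // Not cached: no [Cached] attribute; every call reaches the remote service.
    Task<Customer> GetCustomerUncached(long id);
}
```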
Ignoring concurrency for a moment: when multiple consecutive calls occur, the first causes a cache miss and the rest are cache hits. Every cache miss contacts the data source; cache hits do not.
But what happens if multiple concurrent calls are dispatched before the result of any of them is returned? After the first call is dispatched, the Caching Proxy creates a call group for it; the other concurrent calls do not go out to the data source, but instead join the existing call group and share the result from the data source that was contacted for that group. Effectively, several concurrent identical cache misses are batched into a group, and only one request reaches the data source.
This has the advantage of making fewer requests to the data source and avoiding having to decide which of several (possibly different) results to put in the cache. On the other hand, you might observe one or more of the following (possibly undesirable) phenomena:
- You make several concurrent requests for a call that you know hasn't been cached yet, but you see only one outgoing call in Fiddler.
- Several of your requests get the same exception, even though there was only one failure. That is because an exception that was returned for a call group is propagated to all members of that group.
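A short sketch of what a caller observes. `customerService` is assumed to be an injected `ICustomerService` whose `GetCustomer` is marked `[Cached]`; the comments describe the grouping behavior explained above, not guaranteed output:

```csharp
// Three identical calls dispatched before any of them completes:
Task<Customer>[] calls =
{
    customerService.GetCustomer(42),
    customerService.GetCustomer(42),
    customerService.GetCustomer(42),
};

// All three tasks join one call group: only a single outgoing RPC leaves
// the process, and all three awaiters share its result. If that single
// call fails, all three tasks receive the same exception.
Customer[] results = await Task.WhenAll(calls);
```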
See the Configuring Service Discovery page for information about that.
If you've written your DTOs and interface correctly, when your service starts up, or when the component that requests the interface is initialized, you will see a message in the log that looks like this:
INFO Caching has been enabled for an interface.. ConfigName=MyNamespace.ICoolService.CachingPolicy, IsSlidingTtl=False, Ttl=00:10:00
This means that the interface specified above has at least one method that is cached and that someone requested an instance of that interface.
Despite being bundled in the Gigya.Microdot.ServiceProxy NuGet package (and having a similar name), the Caching Proxy is actually a completely separate and independent component with no dependency on the Service Proxy. For service interfaces, the Service Proxy is wrapped in a Caching Proxy automatically, but you can manually wrap any interface to add caching to it (remember, it can currently only cache async results). This may be useful for interfaces or components that access expensive resources (the file system, databases, 3rd-party online services, intensive computation/hashing/encryption, etc.).
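The wiring API for manual wrapping isn't shown on this page, so the following is a purely conceptual sketch of the decorator idea, not the library's real API. `IPriceCalculator` and `CachingPriceCalculator` are hypothetical names; calls pass through a caching layer before reaching the underlying implementation:

```csharp
// Hypothetical illustration only: a caching decorator over an expensive resource.
public interface IPriceCalculator
{
    [Cached]
    Task<decimal> CalculatePrice(string sku); // expensive computation, async result
}

// Conceptually, the Caching Proxy produces something equivalent to:
public class CachingPriceCalculator : IPriceCalculator
{
    private readonly IPriceCalculator _inner;
    private readonly ConcurrentDictionary<string, Task<decimal>> _cache = new();

    public CachingPriceCalculator(IPriceCalculator inner) => _inner = inner;

    public Task<decimal> CalculatePrice(string sku) =>
        // Concurrent identical calls share one in-flight Task (call grouping);
        // completed Tasks serve later callers from the cache.
        _cache.GetOrAdd(sku, s => _inner.CalculatePrice(s));
}
```

Note that this sketch omits TTL handling and the eviction of failed calls; the real component does not cache results of calls that threw an exception, as described above.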
Step-by-step instructions forthcoming