Allow to switch cache strategy #75
base: master
Conversation
Hum, as per your use case, wouldn't it be better to design a custom cache handler which dispatches data across any storage of your choice according to your rules (I assume they're based on the endpoint?) |
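A minimal sketch of the design suggested above: a single custom cache handler that dispatches to different backends depending on the ESI endpoint. Every name here (`CacheBackend`, `EndpointAwareCache`, the `/markets/` routing rule) is an assumption for illustration, not Eseye's actual API.

```php
<?php
// Hypothetical sketch: one cache handler dispatching across storages
// according to endpoint-based rules. Not Eseye's real interfaces.

interface CacheBackend {
    public function get(string $uri): mixed;
    public function set(string $uri, mixed $data): void;
}

// Tiny in-memory backend so the sketch is self-contained.
class ArrayBackend implements CacheBackend {
    private array $items = [];
    public function get(string $uri): mixed { return $this->items[$uri] ?? null; }
    public function set(string $uri, mixed $data): void { $this->items[$uri] = $data; }
}

class EndpointAwareCache implements CacheBackend {
    public function __construct(
        private CacheBackend $redisLike,
        private CacheBackend $fileLike,
    ) {}

    // The dispatch rule: heavy public routes go to the file-style cache,
    // everything else to the Redis-style cache.
    private function backendFor(string $uri): CacheBackend {
        return str_starts_with($uri, '/markets/') ? $this->fileLike : $this->redisLike;
    }

    public function get(string $uri): mixed { return $this->backendFor($uri)->get($uri); }
    public function set(string $uri, mixed $data): void { $this->backendFor($uri)->set($uri, $data); }
}

// Usage: callers never see which backend holds which route.
$cache = new EndpointAwareCache(new ArrayBackend(), new ArrayBackend());
$cache->set('/markets/10000002/orders/', ['order']);
$cache->set('/characters/123/', ['char']);
```

The advantage of this shape is that the routing rules live in exactly one place, at the cost of that one class knowing about every endpoint family.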
Being able to override the value makes sense though... |
Any update on this specific change request? |
Any update on this pull request? |
@Rakdos8 I'm actually working on Eseye 3, which will be PSR-3, PSR-7 and PSR-16 compliant. As a result, you'll be able to attach any log library, cache library and HTTP client library which meets the following:
I'm still waiting for hints from @leonjza regarding the ability to override the value after initialization. Maybe @Crypta-Eve can also give some input here? |
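For context on what PSR-16 compliance would mean here: any cache library exposing the `Psr\SimpleCache\CacheInterface` surface could then be attached. The in-memory class below is only a sketch of that interface's core method shapes (the PSR-16 method signatures are real; the class itself and the TTL shortcut are illustration only).

```php
<?php
// Sketch of the PSR-16 contract Eseye 3 is said to target.
// A real implementation would also provide clear(), getMultiple(),
// setMultiple() and deleteMultiple(); TTL handling is omitted for brevity.

class ArraySimpleCache {
    private array $items = [];

    public function get(string $key, mixed $default = null): mixed {
        return $this->items[$key] ?? $default;
    }

    public function set(string $key, mixed $value, null|int|\DateInterval $ttl = null): bool {
        $this->items[$key] = $value;
        return true;
    }

    public function delete(string $key): bool {
        unset($this->items[$key]);
        return true;
    }

    public function has(string $key): bool {
        return array_key_exists($key, $this->items);
    }
}

// Usage: a hypothetical ESI status response keyed by its route.
$cache = new ArraySimpleCache();
$cache->set('esi:/status/', ['players' => 12345]);
```

Because the contract is this small, swapping Redis for a file store (or anything else) becomes a constructor-time decision rather than a code change inside Eseye.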
I would mostly prefer to use a specific cache strategy per call. As far as I understand, I will have to create a custom cache manager. I may be wrong, but here is the pattern I understand you are suggesting:

```php
class NewCacheManager {
    public function handleCache(string $esiRoute): CacheStrategy {
        if ($esiRoute == 'routeA') {
            return new RedisCache();
        }

        return new FileCache();
    }
}
```

What I'm looking for:

```php
class CallThatUseRedis implements AnyEsiJobCall {
    public function getCacheStrategy(string $esiRoute): CacheStrategy {
        return new RedisCache();
    }
}

// ...

class CallThatUseFile implements AnyEsiJobCall {
    public function getCacheStrategy(string $esiRoute): CacheStrategy {
        return new FileCache();
    }
}

// ...

class EsiCallingCode {
    public function callToEsi(array $anyEsiJobCalls) {
        foreach ($anyEsiJobCalls as $anyEsiJobCall) {
            Config::setCacheStrategy($anyEsiJobCall->getCacheStrategy());
            $anyEsiJobCall->call();
        }
    }
}
```

That way, each implementation has its own very specific strategy. |
It also fits multi cURL calls (see #82): just as you give a specific header and payload for each request, you would also give a way to handle the cache that is specific to each request/call/context. |
Is your strategy really targeting the cache rather than the storage? |
The main reason why there are several cache handlers is the storage, and perhaps service availability; we don't have a Redis instance everywhere, I guess. I do indeed have a distributed ESI call architecture to split heavy job loads (e.g. public routes for market, contracts, etc.) where the subnet or network has no Redis instance, so I'm using the regular FileCache. Because it's a heavy job (ESI-wise), I run only one instance to limit it, so it matches the storage correctly as well (the handler is also tied to the nature of the storage facility). If I needed to scale my first architecture (the heavy one) to several instances, I would mount an NFS drive (or similar) to share those files between them.
It can be understood like that, indeed, but wouldn't it be clearer if I could specify the right strategy where it's needed (i.e. in my ESI job), rather than having the strategy check which case it is in to understand what the code does? |
Any update on this specific change request? |
So having a look through this, I understand the need you have and do like the ability to have multiple implemented cache storages. However, with that said, I am slightly reserved about its implementation, as this would increase the coupling between the eseye dependency and the eveapi package in SeAT. This PR, as I currently understand it, would require us to implement the abstract method on classes within eveapi, or am I missing something there?

My initial reaction is that I do actually somewhat prefer the CacheManager handling the decision making, though perhaps instead of the hardcoded approach you have there, there may be some way to pass a 'hint' to the cache manager about how to make a decision on strategy. A kind of hybrid between your two approaches.

I am not overly familiar with the architecture of this and am still coming up to speed with eseye, but I will keep an eye on this as I move forward. Please don't expect too much from me on this particular PR for some time. |
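The 'hint' hybrid described above could be sketched like this: a job names a hint, but the cache manager keeps the final say over which registered strategy is used. All class, method and hint names here are hypothetical, not part of eseye or eveapi.

```php
<?php
// Hypothetical hybrid: jobs pass a hint string, the manager decides.
// Jobs never reference concrete cache classes, keeping eveapi/eseye
// coupling low.

class HintedCacheManager {
    /** @param array<string, object> $strategies strategies keyed by hint name */
    public function __construct(
        private array $strategies,
        private string $default,
    ) {}

    public function resolve(?string $hint): object {
        // Honour the hint only when a matching strategy is registered;
        // otherwise fall back to the manager's default choice.
        if ($hint !== null && isset($this->strategies[$hint])) {
            return $this->strategies[$hint];
        }

        return $this->strategies[$this->default];
    }
}

// Usage: stand-ins for real cache strategy instances.
$fileCache  = new \stdClass();
$redisCache = new \stdClass();

$manager = new HintedCacheManager(
    ['file' => $fileCache, 'redis' => $redisCache],
    'redis',
);
```

An unknown or absent hint silently falls back to the default, so a job can be written against a hint that a given deployment simply does not register.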
I had an issue with my ESI calls when some were in the Redis cache and others were in the file cache.
The main reason why I split those calls into separate caches is that some ESI calls required a huge memory load on Redis (more than 4 GB).
I used this code to validate my change:
which currently gives
With the update, it provides