
Allow to switch cache strategy #75

Open
Rakdos8 wants to merge 1 commit into master
Conversation

Rakdos8 (Contributor) commented Sep 5, 2021

I had an issue with my ESI calls when some are cached in Redis and others on file.
The main reason I split those calls into separate caches is that some ESI calls put a huge memory load on Redis (more than 4 GB).

I used this code to validate my change:

		$config = Configuration::getInstance();
		$config->logfile_location = '/tmp';
		$config->file_cache_location = '/tmp';

		$config->cache = FileCache::class;
		var_dump(get_class($config->getCache()));
		echo '<hr>';
		$config->cache = RedisCache::class;
		var_dump(get_class($config->getCache()));
		die;

which currently gives

string(26) "Seat\Eseye\Cache\FileCache"
string(26) "Seat\Eseye\Cache\FileCache"

With the update, it provides

string(26) "Seat\Eseye\Cache\FileCache"
string(27) "Seat\Eseye\Cache\RedisCache"
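The behaviour the PR aims for can be illustrated with a minimal sketch: a getter that rebuilds the cache instance whenever the configured class changes, instead of memoizing the first instance forever. This is a hypothetical reconstruction, not the actual Eseye source; the class names and property layout here are simplified stand-ins.

```php
<?php

// Simplified stand-ins for Eseye's cache classes (hypothetical).
interface CacheInterface {}
class FileCache implements CacheInterface {}
class RedisCache implements CacheInterface {}

class Configuration
{
    public string $cache = FileCache::class;

    private ?CacheInterface $instance = null;

    public function getCache(): CacheInterface
    {
        // Rebuild the instance on first call, or whenever the configured
        // class no longer matches the memoized instance.
        if ($this->instance === null || get_class($this->instance) !== $this->cache) {
            $this->instance = new $this->cache;
        }

        return $this->instance;
    }
}

$config = new Configuration();
var_dump(get_class($config->getCache())); // string(9) "FileCache"
$config->cache = RedisCache::class;
var_dump(get_class($config->getCache())); // string(10) "RedisCache"
```

With the comparison in place, reassigning `$config->cache` after the first `getCache()` call takes effect, which is what the two `var_dump` outputs above demonstrate.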

warlof (Member) commented Sep 10, 2021

Hmm, given your use case, wouldn't it be better to design a custom cache handler that dispatches data across any storage of your choice according to your rules (I assume they're based on the endpoint)?
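A dispatching cache handler along these lines could look like the sketch below. All names and routing rules here are hypothetical; Eseye's real cache interface and classes may differ, and `ArrayCache` is only an in-memory stand-in for the Redis and file backends.

```php
<?php

// Hypothetical cache contract, simplified for illustration.
interface CacheInterface
{
    public function get(string $uri): mixed;
    public function set(string $uri, mixed $data): void;
}

// In-memory stand-in for a real backend (RedisCache / FileCache).
class ArrayCache implements CacheInterface
{
    private array $store = [];
    public function get(string $uri): mixed { return $this->store[$uri] ?? null; }
    public function set(string $uri, mixed $data): void { $this->store[$uri] = $data; }
}

// A single cache handler that routes each entry to one of two backends
// based on the endpoint prefix, per the commenter's rules.
class DispatchingCache implements CacheInterface
{
    public function __construct(
        private CacheInterface $redis,
        private CacheInterface $file,
        private array $redisRoutes // endpoint prefixes that should live in Redis
    ) {}

    private function backend(string $uri): CacheInterface
    {
        foreach ($this->redisRoutes as $prefix) {
            if (str_starts_with($uri, $prefix)) {
                return $this->redis;
            }
        }
        return $this->file; // everything else (e.g. heavy public routes) goes to file
    }

    public function get(string $uri): mixed
    {
        return $this->backend($uri)->get($uri);
    }

    public function set(string $uri, mixed $data): void
    {
        $this->backend($uri)->set($uri, $data);
    }
}

$redis = new ArrayCache();
$file  = new ArrayCache();
$cache = new DispatchingCache($redis, $file, ['/characters/']);

$cache->set('/characters/123/wallet/', 'balance'); // lands in the Redis backend
$cache->set('/markets/prices/', 'prices');         // lands in the file backend
```

The point of this design is that Eseye itself only ever sees one `CacheInterface`; the routing policy stays entirely in user code.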

warlof (Member) commented Dec 3, 2021

Being able to override the value makes sense though...
@leonjza, need a hint here please

Rakdos8 (Contributor, Author) commented Apr 27, 2022

Any update on this specific change request?

Rakdos8 (Contributor, Author) commented Aug 5, 2022

Any update on this pull request?

warlof (Member) commented Aug 26, 2022

@Rakdos8 I'm actually working on Eseye 3, and it will be PSR-3, PSR-7 and PSR-16 compliant.

As a result, you'll be able to attach any log library, cache library and HTTP client library that meets those standards.

I'm still waiting for hints from @leonjza regarding the ability to override the value after initialization.
However, I wonder whether it will still be applicable, as you'll be able to implement your own cache driver in case you need to dispatch load between file and Redis (or even blob?).

Maybe @Crypta-Eve can also give some input here?

Rakdos8 (Contributor, Author) commented Sep 15, 2022

I would mostly prefer to use a specific CacheStrategy chosen from the context where it's called.

As far as I understand, I would otherwise have to create a custom CacheManager with weird, out-of-context conditions to decide, according to the route, whether it should be A or B. This is called a Strategy, specifically the Strategy pattern.

I may be wrong, but here is the pattern I believe you are suggesting:

class NewCacheManager {
    public function handleCache(string $esiRoute): CacheStrategy {
        if ($esiRoute === 'routeA') {
            return new RedisCache();
        } else {
            return new FileCache();
        }
    }
}

What I'm looking for :

class CallThatUseRedis implements AnyEsiJobCall {
    public function getCacheStrategy(): CacheStrategy {
        return new RedisCache();
    }
}

...

class CallThatUseFile implements AnyEsiJobCall {
    public function getCacheStrategy(): CacheStrategy {
        return new FileCache();
    }
}

...

class EsiCallingCode {
    public function callToEsi(array $anyEsiJobCalls) {
        foreach ($anyEsiJobCalls as $anyEsiJobCall) {
            Config::setCacheStrategy($anyEsiJobCall->getCacheStrategy());
            $anyEsiJobCall->call();
        }
    }
}

That way, each implementation has its own very specific strategy.
The current Singleton pattern used for the Configuration doesn't allow bringing in new features and blocks every evolution of it.
More details about this anti-pattern:

Rakdos8 (Contributor, Author) commented Sep 15, 2022

It also fits with multi-cURL calls (see #82): just as you give a specific header and payload for each request, you'd also give a specific way to handle the cache for each request/call/context.

warlof (Member) commented Sep 25, 2022

Is your strategy really targeting the cache rather than the storage?
In my opinion, it isn't: what you want to implement is a cache handler that uses a strategy to determine the storage in which it will manage the requested entry.

Rakdos8 (Contributor, Author) commented Sep 27, 2022

The main reason there are several cache handlers is the storage, and perhaps service availability. We don't have a Redis instance everywhere, I guess.
Even so, I'm actually looking to target the right cache handler according to my distributed ESI call architecture, not the storage.

I do have a distributed ESI call architecture to split heavy job loads (e.g. public routes for market, contracts, etc.). That part doesn't have a Redis instance in its subnet or network, so I'm using the regular FileCache; and because it's a heavy job (ESI-wise), I run only one instance to limit it, so it matches the storage as well (the handler is also tied to the nature of the storage facility).
For other ESI calls (such as wallet, skills, etc.), I'm using a distributed architecture with a Redis instance, because there is one on that subnet/network and it can be reached. Redis also lets me plug in several instances and share the cache across all of them, even though I could share the same file path between those instances instead. How the data is stored is not the pain point here.

If I needed to scale my first architecture (the heavy one) to several instances, I would mount an NFS drive (or similar) to share those files between them.
At first I added another Redis instance for it, but my Raspberry Pi failed to handle everything (CPU- and memory-wise), which is why I switched the strategy back to FileCache.

> what you want to implement is a cache handler which is using a strategy to determine the storage in which it will manage the requested entry.

It can indeed be understood like that, but wouldn't it be clearer if I could specify the right Strategy where it's needed (i.e. in my ESI job), rather than having to read through that Strategy to figure out which case applies and what the code does?
Running a gorgeous switch/case on a string (basically the instance's hostname and/or the ESI URL path) isn't what I'd call "reliable".

Rakdos8 (Contributor, Author) commented Nov 28, 2022

Any update on this specific change request?

Crypta-Eve (Member) commented
Having a look through this, I understand the need you have and do like the ability to have multiple cache storage implementations.

That said, I'm slightly reserved about the implementation, as it would increase the coupling between the eseye dependency and the eveapi package in SeAT. As I currently understand it, this PR would require us to implement the abstract method on classes within eveapi, or am I missing something there? My initial reaction is that I actually somewhat prefer the CacheManager handling the decision making, though perhaps, instead of the hardcoded approach, there may be some way to pass a 'hint' to the cache manager about how to decide on a strategy. A kind of hybrid between your two approaches.

I'm not overly familiar with the architecture here and am still coming up to speed with eseye, but I will keep an eye on this as I move forward. Please don't expect too much from me on this particular PR for some time.
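The hinted hybrid could be sketched roughly as follows. Every name here is purely illustrative (none of these classes exist in eseye or eveapi): callers declare an abstract hint instead of a concrete cache class, and the manager keeps the hint-to-storage mapping in one central place.

```php
<?php

// Hypothetical stand-ins for Eseye's cache classes.
interface CacheInterface {}
class FileCache implements CacheInterface {}
class RedisCache implements CacheInterface {}

// Hypothetical "hint" approach: jobs don't pick a concrete cache class,
// they pass an abstract hint; the manager maps hints to storages centrally.
class HintedCacheManager
{
    /** @var array<string, class-string<CacheInterface>> */
    private array $map = [
        'heavy'   => FileCache::class,  // large public datasets (market, contracts)
        'default' => RedisCache::class, // small per-character data (wallet, skills)
    ];

    public function resolve(string $hint = 'default'): CacheInterface
    {
        // Unknown hints fall back to the default storage.
        $class = $this->map[$hint] ?? $this->map['default'];
        return new $class;
    }
}

// An ESI job declares a hint rather than a concrete strategy:
$manager = new HintedCacheManager();
var_dump(get_class($manager->resolve('heavy'))); // string(9) "FileCache"
var_dump(get_class($manager->resolve()));        // string(10) "RedisCache"
```

This keeps the decision making in the manager (as preferred above) while still letting each job influence the outcome, without eveapi classes having to implement an abstract method from eseye.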
