A global look-ahead acquisition function.
Args:
- model (GPyTorchModel): The gpytorch model to use.
+ lb (Tensor): Lower bounds of the input space, used to generate the query set (Xq).
+ ub (Tensor): Upper bounds of the input space, used to generate the query set (Xq).
+    model (GPyTorchModel): The gpytorch model.
+    lookahead_type (Literal["levelset", "posterior"]): The type of look-ahead to perform (default is "levelset").
+        - If the lookahead_type is "levelset", the acqf will consider the posterior probability that a point is above or below the target level set.
+        - If the lookahead_type is "posterior", the acqf will consider the posterior probability that a point will be detected or not.
    target (float, optional): Threshold value to target in p-space.
- posterior_transform (PosteriorTransform, optional): Optional transformation to apply to the posterior.
- query_set_size (int, optional): Number of points in the query set.
- Xq (torch.Tensor, optional): (m x d) global reference set.
+ posterior_transform (PosteriorTransform, optional): Posterior transform to use. Defaults to None.
+ query_set_size (int, optional): Size of the query set. Defaults to 256.
+    Xq (Tensor, optional): (m x d) global reference set. Defaults to None.
    """
    super().__init__(model=model, target=target, lookahead_type=lookahead_type)
    self.posterior_transform = posterior_transform
@@ -300,7 +305,7 @@
Source code for aepsych.acquisition.lookahead
        assert int(query_set_size) == query_set_size  # make sure casting is safe
        # if the asserts above pass and Xq is None, query_set_size is not None so this is safe
        query_set_size = int(query_set_size)  # cast
-        Xq = make_scaled_sobol(model.lb, model.ub, query_set_size)
+        Xq = make_scaled_sobol(lb, ub, query_set_size)
        self.register_buffer("Xq", Xq)

    @t_batch_mode_transform(expected_q=1)
@@ -353,8 +358,10 @@
Args:
model (GPyTorchModel): The gpytorch model to use.
- lookahed_type (str): The type of look-ahead to perform (default is "levelset").
+ lookahead_type (Literal["levelset", "posterior"]): The type of look-ahead to perform (default is "levelset").
+ - If the lookahead_type is "levelset", the acqf will consider the posterior probability that a point is above or below the target level set.
+    - If the lookahead_type is "posterior", the acqf will consider the posterior probability that a point will be detected or not.
    target (float, optional): Threshold value to target in p-space.
    query_set_size (int, optional): Number of points in the query set.
    Xq (torch.Tensor, optional): (m x d) global reference set.
@@ -373,6 +382,8 @@
Source code for aepsych.acquisition.lookahead
            lookahead_type == "levelset"
        ), f"ApproxGlobalSUR only supports lookahead on level set, got {lookahead_type}!"
        super().__init__(
+            lb=lb,
+            ub=ub,
            model=model,
            target=target,
            lookahead_type=lookahead_type,
@@ -449,8 +460,10 @@
    ) -> None:
        """
        model (GPyTorchModel): The gpytorch model to use.
- lookahead_type (str): The type of look-ahead to perform (default is "posterior").
+ lookahead_type (Literal["levelset", "posterior"]): The type of look-ahead to perform (default is "posterior").
+ - If the lookahead_type is "levelset", the acqf will consider the posterior probability that a point is above or below the target level set.
+    - If the lookahead_type is "posterior", the acqf will consider the posterior probability that a point will be detected or not.
    target (float, optional): Threshold value to target in p-space. Default is None.
    query_set_size (int, optional): Number of points in the query set. Default is 256.
    Xq (torch.Tensor, optional): (m x d) global reference set. Default is None.
@@ -466,6 +481,8 @@
Args:
    model (GPyTorchModel): The gpytorch model to use.
    training_data (None): Placeholder for compatibility; not used in this function.
- lookahead_type (str): Type of look-ahead to perform. Default is "levelset".
+ lookahead_type (Literal["levelset", "posterior"]): The type of look-ahead to perform (default is "levelset").
+ - If the lookahead_type is "levelset", the acqf will consider the posterior probability that a point is above or below the target level set.
+    - If the lookahead_type is "posterior", the acqf will consider the posterior probability that a point will be detected or not.
    target (float, optional): Target threshold value in probability space. Default is None.
    posterior_transform (PosteriorTransform, optional): Optional transformation to apply to the posterior. Default is None.
    query_set_size (int, optional): Number of points in the query set. Default is 256.
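Net effect of the hunks above: the global look-ahead acquisition functions now take explicit lb/ub bounds and build their Sobol query set from them instead of reading model.lb/model.ub. A minimal sketch of the updated constructor call, using the ApproxGlobalSUR subclass named in this diff (the model object and bound values are placeholders, not from the diff):

import torch
from aepsych.acquisition.lookahead import ApproxGlobalSUR

lb = torch.tensor([0.0, 0.0])  # lower bounds of the input space
ub = torch.tensor([1.0, 1.0])  # upper bounds of the input space

acqf = ApproxGlobalSUR(
    lb=lb,
    ub=ub,
    model=fitted_model,         # a fitted GPyTorchModel (placeholder)
    target=0.75,                # threshold in p-space
    lookahead_type="levelset",  # ApproxGlobalSUR only supports level-set lookahead
    query_set_size=256,         # Xq is generated via make_scaled_sobol(lb, ub, 256)
)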
diff --git a/api/_modules/aepsych/acquisition/lookahead/index.html b/api/_modules/aepsych/acquisition/lookahead/index.html
index 5aacf4d40..28876d0d4 100644
--- a/api/_modules/aepsych/acquisition/lookahead/index.html
+++ b/api/_modules/aepsych/acquisition/lookahead/index.html
@@ -34,6 +34,7 @@
Source code for aepsych.acquisition.lookahead
from botorch.models.gpytorch import GPyTorchModel
from botorch.utils.transforms import t_batch_mode_transform
from scipy.stats import norm
+from torch import Tensor

from .lookahead_utils import (
    approximate_lookahead_levelset_at_xstar,
diff --git a/api/_modules/aepsych/benchmark/benchmark.html b/api/_modules/aepsych/benchmark/benchmark.html
index 292fcf019..0b45a641a 100644
--- a/api/_modules/aepsych/benchmark/benchmark.html
+++ b/api/_modules/aepsych/benchmark/benchmark.html
@@ -173,6 +173,7 @@
Source code for aepsych.benchmark.example_problems
# Copyright (c) Meta Platforms, Inc. and affiliates. All rights reserved.

import os
-from typing import List, Optional, Union
+from typing import List, Union

import numpy as np
import torch
@@ -29,7 +29,7 @@
Source code for aepsych.benchmark.example_problems
    novel_discrimination_testfun,
)
from aepsych.models import GPClassificationModel
-from aepsych.models.inducing_point_allocators import KMeansAllocator
+from aepsych.models.inducing_points import KMeansAllocator

"""The DiscrimLowDim, DiscrimHighDim, ContrastSensitivity6d, and Hartmann6Binary classes
are copied from bernoulli_lse github repository (https://github.com/facebookresearch/bernoulli_lse)
@@ -122,13 +122,13 @@
Source code for aepsych.benchmark.example_problems
)
        y = torch.LongTensor(self.data[:, 0])
        x = torch.Tensor(self.data[:, 1:])
+        inducing_size = 100

        # Fit a model, with a large number of inducing points
        self.m = GPClassificationModel(
-            lb=self.bounds[0],
-            ub=self.bounds[1],
-            inducing_size=100,
-            inducing_point_method=KMeansAllocator(bounds=self.bounds),
+            dim=6,
+            inducing_size=inducing_size,
+            inducing_point_method=KMeansAllocator(dim=6),
        )

        self.m.fit(
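For reference, the updated construction pattern used here: bounds drop out of the model entirely, and both the model and the allocator are keyed on dimensionality. A sketch with placeholder training data (x of shape (n, 6), y a LongTensor, as in the surrounding code):

import torch
from aepsych.models import GPClassificationModel
from aepsych.models.inducing_points import KMeansAllocator

# The model no longer takes lb/ub; the allocator is likewise keyed on dim, not bounds.
model = GPClassificationModel(
    dim=6,
    inducing_size=100,
    inducing_point_method=KMeansAllocator(dim=6),
)
model.fit(x, y)  # x: (n, 6) Tensor, y: (n,) LongTensor (placeholders)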
diff --git a/api/_modules/aepsych/generators/acqf_thompson_sampler_generator.html b/api/_modules/aepsych/generators/acqf_thompson_sampler_generator.html
index 7017658cc..43628ffd3 100644
--- a/api/_modules/aepsych/generators/acqf_thompson_sampler_generator.html
+++ b/api/_modules/aepsych/generators/acqf_thompson_sampler_generator.html
@@ -59,6 +59,8 @@
Source code for aepsych.generators.acqf_thompson_sampler_generator
    ) -> None:
        """Initialize OptimizeAcqfGenerator.

        Args:
+            lb (torch.Tensor): Lower bounds for the optimization.
+            ub (torch.Tensor): Upper bounds for the optimization.
            acqf (AcquisitionFunction): Acquisition function to use.
            acqf_kwargs (Dict[str, object], optional): Extra arguments to pass
                to acquisition function. Defaults to no arguments.
@@ -79,6 +83,8 @@
Source code for aepsych.generators.acqf_thompson_sampler_generator
self.acqf_kwargs=acqf_kwargsself.samps=sampsself.stimuli_per_trial=stimuli_per_trial
+ self.lb=lb
+ self.ub=ubdef_instantiate_acquisition_fn(self,model:ModelProtocol)->AcquisitionFunction:"""Instantiate the acquisition function with the model and any extra arguments.
@@ -142,7 +148,7 @@
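The Thompson-sampler generator picks up the same explicit-bounds pattern. A hedged sketch of the new constructor (the class name AcqfThompsonSamplerGenerator is inferred from the module path, and MCPosteriorVariance stands in for any acquisition function):

import torch
from aepsych.acquisition import MCPosteriorVariance
from aepsych.generators import AcqfThompsonSamplerGenerator

generator = AcqfThompsonSamplerGenerator(
    lb=torch.tensor([0.0]),
    ub=torch.tensor([1.0]),
    acqf=MCPosteriorVariance,
)
x_next = generator.gen(num_points=1, model=fitted_model)  # fitted_model: a placeholder ModelProtocol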
Source code for aepsych.generators.epsilon_greedy_generator
[docs]class EpsilonGreedyGenerator(AEPsychGenerator):
-    def __init__(self, subgenerator: AEPsychGenerator, epsilon: float = 0.1) -> None:
+    def __init__(
+        self,
+        lb: torch.Tensor,
+        ub: torch.Tensor,
+        subgenerator: AEPsychGenerator,
+        epsilon: float = 0.1,
+    ) -> None:
        """Initialize EpsilonGreedyGenerator.

        Args:
+            lb (torch.Tensor): Lower bounds for the optimization.
+            ub (torch.Tensor): Upper bounds for the optimization.
            subgenerator (AEPsychGenerator): The generator to use when not exploiting.
            epsilon (float): The probability of exploration. Defaults to 0.1.
        """
        self.subgenerator = subgenerator
        self.epsilon = epsilon
+        self.lb = lb
+        self.ub = ub

[docs]    def gen(self, num_points: int, model: ModelProtocol) -> torch.Tensor:
        """Query next point(s) to run by sampling from the subgenerator with probability 1-epsilon, and randomly otherwise.
@@ -71,7 +83,7 @@
Source code for aepsych.generators.epsilon_greedy_generator
        if num_points > 1:
            raise NotImplementedError("Epsilon-greedy batched gen is not implemented!")
        if np.random.uniform() < self.epsilon:
-            sample = np.random.uniform(low=model.lb, high=model.ub)
+            sample = np.random.uniform(low=self.lb, high=self.ub)
            return torch.tensor(sample).reshape(1, -1)
        else:
            return self.subgenerator.gen(num_points, model)
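With lb/ub stored on the generator, exploration sampling no longer touches model.lb/model.ub. A sketch of the new construction (using SobolGenerator as the subgenerator is an assumption; any AEPsychGenerator works):

import torch
from aepsych.generators import EpsilonGreedyGenerator, SobolGenerator

lb = torch.tensor([0.0, -1.0])
ub = torch.tensor([1.0, 1.0])
gen = EpsilonGreedyGenerator(
    lb=lb,
    ub=ub,
    subgenerator=SobolGenerator(lb=lb, ub=ub, dim=2),
    epsilon=0.1,  # explore uniformly at random on 10% of trials
)
x_next = gen.gen(num_points=1, model=fitted_model)  # fitted_model is a placeholder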
Source code for aepsych.generators.monotonic_rejection_generator
from aepsych.config import Config
from aepsych.generators.base import AEPsychGenerator
from aepsych.models.monotonic_rejection_gp import MonotonicRejectionGP
+from aepsych.utils import _process_bounds
from botorch.acquisition import AcquisitionFunction
from botorch.logging import logger
from botorch.optim.initializers import gen_batch_initial_conditions
@@ -61,13 +62,17 @@
Source code for aepsych.generators.monotonic_rejection_generator
    def __init__(
        self,
        acqf: MonotonicMCAcquisition,
+        lb: torch.Tensor,
+        ub: torch.Tensor,
        acqf_kwargs: Optional[Dict[str, Any]] = None,
        model_gen_options: Optional[Dict[str, Any]] = None,
        explore_features: Optional[Sequence[int]] = None,
    ) -> None:
        """Initialize MonotonicRejectionGenerator.

        Args:
-            acqf (MonotonicMCAcquisition): Acquisition function to use.
+            acqf (AcquisitionFunction): Acquisition function to use.
+            lb (torch.Tensor): Lower bounds for the optimization.
+            ub (torch.Tensor): Upper bounds for the optimization.
            acqf_kwargs (Dict[str, object], optional): Extra arguments to pass to acquisition function. Defaults to None.
            model_gen_options (Dict[str, Any], optional): Dictionary with options for generating candidate, such as
@@ -81,6 +86,8 @@
Source code for aepsych.generators.monotonic_rejection_generator
        )

        # Augment bounds with deriv indicator
-        bounds = torch.cat((model.bounds_, torch.zeros(2, 1)), dim=1)
+        bounds = torch.cat((self.bounds, torch.zeros(2, 1)), dim=1)
        # Fix deriv indicator to 0 during optimization
        fixed_features = {(bounds.shape[1] - 1): 0.0}
        # Fix explore features to random values
@@ -210,6 +217,8 @@
Source code for aepsych.generators.monotonic_rejection_generator
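The same pattern applies to the monotonic rejection generator: bounds come in through the constructor and, per the _process_bounds import added above, presumably get stacked into the self.bounds used when augmenting with the derivative indicator. A hedged sketch; MonotonicMCLSE is an assumed MonotonicMCAcquisition subclass:

import torch
from aepsych.acquisition.monotonic_rejection import MonotonicMCLSE
from aepsych.generators import MonotonicRejectionGenerator

gen = MonotonicRejectionGenerator(
    acqf=MonotonicMCLSE,
    lb=torch.tensor([0.0]),
    ub=torch.tensor([1.0]),
    acqf_kwargs={"target": 0.75},  # forwarded to the acquisition function
)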
Source code for aepsych.generators.optimize_acqf_generator
# LICENSE file in the root directory of this source tree.

from __future__ import annotations

+import inspect
import time
from typing import Any, Dict, Optional

-import numpy as np
import torch
from aepsych.config import Config
from aepsych.generators.base import AEPsychGenerator
@@ -57,6 +57,8 @@
Source code for aepsych.generators.optimize_acqf_generator
    ) -> None:
        """Initialize OptimizeAcqfGenerator.

        Args:
+            lb (torch.Tensor): Lower bounds for the optimization.
+            ub (torch.Tensor): Upper bounds for the optimization.
            acqf (AcquisitionFunction): Acquisition function to use.
            acqf_kwargs (Dict[str, object], optional): Extra arguments to pass
                to acquisition function. Defaults to no arguments.
@@ -83,6 +87,8 @@
Source code for aepsych.generators.optimize_acqf_generator
[docs]    def gen(self, num_points: int, model: ModelProtocol, **gen_options) -> torch.Tensor:
        """Query next point(s) to run by optimizing the acquisition function.
@@ -142,12 +167,16 @@
Source code for aepsych.generators.optimize_acqf_generator
            restart and sample parameters, maximum generation time, and stimuli per trial.
        """
        classname = cls.__name__
+        lb = config.gettensor(classname, "lb")
+        ub = config.gettensor(classname, "ub")
        acqf = config.getobj(classname, "acqf", fallback=None)
        extra_acqf_args = cls._get_acqf_options(acqf, config)
        stimuli_per_trial = config.getint(classname, "stimuli_per_trial")
@@ -179,6 +210,8 @@
Source code for aepsych.generators.optimize_acqf_generator
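On the config path, lb/ub are now read from the generator's own section via config.gettensor, as the hunk above shows. A sketch of an INI config that exercises this (the acqf choice and option values are illustrative):

from aepsych.config import Config
from aepsych.generators import OptimizeAcqfGenerator

config = Config(
    config_str="""
    [OptimizeAcqfGenerator]
    lb = [0, 0]
    ub = [1, 1]
    acqf = MCLevelSetEstimation
    stimuli_per_trial = 1
    """
)
generator = OptimizeAcqfGenerator.from_config(config)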
[docs]    def get_max(
-        self: ModelProtocol,
-        locked_dims: Optional[Mapping[int, float]] = None,
-        probability_space: bool = False,
-        n_samples: int = 1000,
-        max_time: Optional[float] = None,
-    ) -> Tuple[float, torch.Tensor]:
-"""Return the maximum of the modeled function, subject to constraints
-
- Args:
- locked_dims (Mapping[int, List[float]], optional): Dimensions to fix, so that the
- max is along a slice of the full surface. Defaults to None.
- probability_space (bool): Is y (and therefore the returned nearest_y) in
- probability space instead of latent function space? Defaults to False.
- n_samples (int): number of coarse grid points to sample for optimization estimate.
- max_time (float, optional): Maximum time to spend optimizing. Defaults to None.
-
- Returns:
- Tuple[float, torch.Tensor]: Tuple containing the max and its location (argmax).
- """
-        locked_dims = locked_dims or {}
-        _, _arg = get_extremum(
-            self, "max", self.bounds, locked_dims, n_samples, max_time=max_time
-        )
-        arg = torch.tensor(_arg.reshape(1, self.dim))
-        if probability_space:
-            val, _ = self.predict_probability(arg)
-        else:
-            val, _ = self.predict(arg)
-        return float(val.item()), arg
-
-
[docs]    def get_min(
-        self: ModelProtocol,
-        locked_dims: Optional[Mapping[int, float]] = None,
-        probability_space: bool = False,
-        n_samples: int = 1000,
-        max_time: Optional[float] = None,
-    ) -> Tuple[float, torch.Tensor]:
-"""Return the minimum of the modeled function, subject to constraints
- Args:
- locked_dims (Mapping[int, List[float]], optional): Dimensions to fix, so that the
- min is along a slice of the full surface.
- probability_space (bool): Is y (and therefore the returned nearest_y) in
- probability space instead of latent function space? Defaults to False.
- n_samples (int): number of coarse grid points to sample for optimization estimate.
- max_time (float, optional): Maximum time to spend optimizing. Defaults to None.
-
- Returns:
- Tuple[float, torch.Tensor]: Tuple containing the min and its location (argmin).
- """
-        locked_dims = locked_dims or {}
-        _, _arg = get_extremum(
-            self, "min", self.bounds, locked_dims, n_samples, max_time=max_time
-        )
-        arg = torch.tensor(_arg.reshape(1, self.dim))
-        if probability_space:
-            val, _ = self.predict_probability(arg)
-        else:
-            val, _ = self.predict(arg)
-        return float(val.item()), arg
-
-
[docs]    def inv_query(
-        self,
-        y: float,
-        locked_dims: Optional[Mapping[int, float]] = None,
-        probability_space: bool = False,
-        n_samples: int = 1000,
-        max_time: Optional[float] = None,
-        weights: Optional[torch.Tensor] = None,
-    ) -> Tuple[float, torch.Tensor]:
-"""Query the model inverse.
- Return nearest x such that f(x) = queried y, and also return the
- value of f at that point.
-
- Args:
- y (float): Points at which to find the inverse.
- locked_dims (Mapping[int, float], optional): Dimensions to fix, so that the
- inverse is along a slice of the full surface.
- probability_space (bool): Is y (and therefore the returned nearest_y) in
- probability space instead of latent function space? Defaults to False.
- n_samples (int): number of coarse grid points to sample for optimization estimate. Defaults to 1000.
- max_time (float, optional): Maximum time to spend optimizing. Defaults to None.
- weights (torch.Tensor, optional): Weights for the optimization. Defaults to None.
-
- Returns:
- Tuple[float, torch.Tensor]: Tuple containing the value of f
- nearest to queried y and the x position of this value.
- """
-        _, _arg = inv_query(
-            self,
-            y=y,
-            bounds=self.bounds,
-            locked_dims=locked_dims,
-            probability_space=probability_space,
-            n_samples=n_samples,
-            max_time=max_time,
-            weights=weights,
-        )
-        arg = torch.tensor(_arg.reshape(1, self.dim))
-        if probability_space:
-            val, _ = self.predict_probability(arg.reshape(1, self.dim))
-        else:
-            val, _ = self.predict(arg.reshape(1, self.dim))
-        return float(val.item()), arg
-
-
[docs]    def get_jnd(
-        self: ModelProtocol,
-        grid: Optional[torch.Tensor] = None,
-        cred_level: Optional[float] = None,
-        intensity_dim: int = -1,
-        confsamps: int = 500,
-        method: str = "step",
-    ) -> Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]:
-"""Calculate the JND.
-
- Note that JND can have multiple plausible definitions
- outside of the linear case, so we provide options for how to compute it.
- For method="step", we report how far one needs to go over in stimulus
- space to move 1 unit up in latent space (this is a lot of people's
- conventional understanding of the JND).
- For method="taylor", we report the local derivative, which also maps to a
- 1st-order Taylor expansion of the latent function. This is a formal
- generalization of JND as defined in Weber's law.
- Both definitions are equivalent for linear psychometric functions.
-
- Args:
- grid (torch.Tensor, optional): Mesh grid over which to find the JND.
- Defaults to a square grid of size as determined by aepsych.utils.dim_grid.
- cred_level (float, optional): Credible level for computing an interval.
- Defaults to None, computing no interval.
- intensity_dim (int): Dimension over which to compute the JND.
- Defaults to -1.
- confsamps (int): Number of posterior samples to use for
- computing the credible interval. Defaults to 500.
- method (str): "taylor" or "step" method (see docstring).
- Defaults to "step".
-
- Returns:
- Union[torch.Tensor, Tuple[torch.Tensor, torch.Tensor, torch.Tensor]]: either the
- mean JND, or a median, lower, upper tuple of the JND posterior.
- """
-        if grid is None:
-            grid = self.dim_grid()
-        elif isinstance(grid, np.ndarray):
-            grid = torch.tensor(grid)
-
-        # this is super awkward, back into intensity dim grid assuming a square grid
-        gridsize = int(grid.shape[0] ** (1 / grid.shape[1]))
-        coords = torch.linspace(
-            self.lb[intensity_dim].item(), self.ub[intensity_dim].item(), gridsize
-        )
-
-        if cred_level is None:
-            fmean, _ = self.predict(grid)
-            fmean = fmean.reshape(*[gridsize for i in range(self.dim)])
-
-            if method == "taylor":
-                return torch.tensor(1 / np.gradient(fmean, coords, axis=intensity_dim))
-            elif method == "step":
-                return torch.clip(
-                    get_jnd_multid(
-                        fmean,
-                        coords,
-                        mono_dim=intensity_dim,
-                    ),
-                    0,
-                    np.inf,
-                )
-
-        alpha = 1 - cred_level  # type: ignore
-        qlower = alpha / 2
-        qupper = 1 - alpha / 2
-
-        fsamps = self.sample(grid, confsamps)
-        if method == "taylor":
-            jnds = torch.tensor(
-                1
-                / np.gradient(
-                    fsamps.reshape(confsamps, *[gridsize for i in range(self.dim)]),
-                    coords,
-                    axis=intensity_dim,
-                )
-            )
-        elif method == "step":
-            samps = [s.reshape((gridsize,) * self.dim) for s in fsamps]
-            jnds = torch.stack(
-                [get_jnd_multid(s, coords, mono_dim=intensity_dim) for s in samps]
-            )
-        else:
-            raise RuntimeError(f"Unknown method {method}!")
-        upper = torch.clip(torch.quantile(jnds, qupper, axis=0), 0, np.inf)  # type: ignore
-        lower = torch.clip(torch.quantile(jnds, qlower, axis=0), 0, np.inf)  # type: ignore
-        median = torch.clip(torch.quantile(jnds, 0.5, axis=0), 0, np.inf)  # type: ignore
-        return median, lower, upper
-
-
[docs]    def dim_grid(
-        self: ModelProtocol,
-        gridsize: int = 30,
-        slice_dims: Optional[Mapping[int, float]] = None,
-    ) -> torch.Tensor:
-"""Generate a grid based on lower, upper, and dim.
-
- Args:
- gridsize (int): Number of points in each dimension. Defaults to 30.
- slice_dims (Mapping[int, float], optional): Dimensions to fix at a certain value. Defaults to None.
- """
-        return dim_grid(self.lb, self.ub, gridsize, slice_dims)
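These hunks drop the model-level query helpers (get_max, get_min, inv_query, get_jnd, dim_grid), all of which depended on the model carrying its own lb/ub/bounds. The grid helper they wrapped is the standalone aepsych.utils.dim_grid, so the equivalent call now passes bounds explicitly; a minimal sketch, with argument order following the removed wrapper's dim_grid(self.lb, self.ub, gridsize, slice_dims) call:

import torch
from aepsych.utils import dim_grid

lb = torch.tensor([0.0, 0.0])
ub = torch.tensor([1.0, 1.0])

# 30 points per dimension over [lb, ub], with dimension 1 fixed at 0.5
grid = dim_grid(lb, ub, gridsize=30, slice_dims={1: 0.5})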
diff --git a/api/_modules/aepsych/models/gp_classification/index.html b/api/_modules/aepsych/models/gp_classification/index.html
index adba99465..d3d3e7d94 100644
--- a/api/_modules/aepsych/models/gp_classification/index.html
+++ b/api/_modules/aepsych/models/gp_classification/index.html
@@ -24,7 +24,6 @@
Source code for aepsych.models.gp_classification
# LICENSE file in the root directory of this source tree.

from __future__ import annotations

-import warnings
from copy import deepcopy
from typing import Any, Dict, Optional, Tuple
@@ -34,16 +33,10 @@
    def __init__(
        self,
-        lb: torch.Tensor,
-        ub: torch.Tensor,
-        inducing_point_method: InducingPointAllocator,
-        dim: Optional[int] = None,
+        dim: int,
        mean_module: Optional[gpytorch.means.Mean] = None,
        covar_module: Optional[gpytorch.kernels.Kernel] = None,
        likelihood: Optional[Likelihood] = None,
-        inducing_size: Optional[int] = None,
+        inducing_point_method: Optional[InducingPointAllocator] = None,
+        inducing_size: int = 100,
        max_fit_time: Optional[float] = None,
        optimizer_options: Optional[Dict[str, Any]] = None,
-        inducing_points: Optional[torch.Tensor] = None,
    ) -> None:
        """Initialize the GP Classification model

        Args:
-            lb (torch.Tensor): Lower bounds of the parameters.
-            ub (torch.Tensor): Upper bounds of the parameters.
-            inducing_point_method (InducingPointAllocator): The method to use for selecting inducing points.
-            dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
-                of lb and ub.
+            dim (int): The number of dimensions in the parameter space.
            mean_module (gpytorch.means.Mean, optional): GP mean class. Defaults to a constant with a normal prior.
            covar_module (gpytorch.kernels.Kernel, optional): GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
            likelihood (gpytorch.likelihood.Likelihood, optional): The likelihood function to use. If None defaults to Bernoulli likelihood.
-            inducing_size (int, optional): Number of inducing points. Defaults to 99.
+            inducing_point_method (InducingPointAllocator, optional): The method to use for selecting inducing points.
+                If not set, a GreedyVarianceReduction is made.
+            inducing_size (int): Number of inducing points. Defaults to 100.
            max_fit_time (float, optional): The maximum amount of time, in seconds, to spend fitting the model. If None, there is no limit to the fitting time.
            optimizer_options (Dict[str, Any], optional): Optimizer options to pass to the SciPy optimizer during fitting. Assumes we are using L-BFGS-B.
        """
-        lb, ub, self.dim = _process_bounds(lb, ub, dim)
+        self.dim = dim
        self.max_fit_time = max_fit_time
-        self.inducing_size = inducing_size or 99
+        self.inducing_size = inducing_size
        self.optimizer_options = (
            {"options": optimizer_options} if optimizer_options else {"options": {}}
@@ -129,15 +117,14 @@
        )
        super().__init__(variational_strategy)

-        # Tensors need to be directly registered, Modules themselves can be assigned as attr
-        self.register_buffer("lb", lb)
-        self.register_buffer("ub", ub)
        self.likelihood = likelihood
        self.mean_module = mean_module or default_mean
        self.covar_module = covar_module or default_covar
@@ -175,11 +159,11 @@
        max_fit_time = config.getfloat(classname, "max_fit_time", fallback=None)

        inducing_point_method_class = config.getobj(
-            classname, "inducing_point_method", fallback=AutoAllocator
+            classname, "inducing_point_method", fallback=GreedyVarianceReduction
        )

        # Check if allocator class has a `from_config` method
        if hasattr(inducing_point_method_class, "from_config"):
@@ -210,8 +194,6 @@
    def __init__(
        self,
-        lb: torch.Tensor,
-        ub: torch.Tensor,
-        inducing_point_method: InducingPointAllocator,
-        dim: Optional[int] = None,
+        dim: int,
        mean_module: Optional[gpytorch.means.Mean] = None,
        covar_module: Optional[gpytorch.kernels.Kernel] = None,
        likelihood: Optional[Likelihood] = None,
-        inducing_size: Optional[int] = None,
+        inducing_point_method: Optional[InducingPointAllocator] = None,
+        inducing_size: int = 100,
        max_fit_time: Optional[float] = None,
        optimizer_options: Optional[Dict[str, Any]] = None,
    ) -> None:
        """Initialize the GP Beta Regression model

        Args:
-            lb (torch.Tensor): Lower bounds of the parameters.
-            ub (torch.Tensor): Upper bounds of the parameters.
-            inducing_point_method (InducingPointAllocator): The method to use to select the inducing points.
-            dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
-                of lb and ub. Defaults to None.
+            dim (int): The number of dimensions in the parameter space.
            mean_module (gpytorch.means.Mean, optional): GP mean class. Defaults to a constant with a normal prior. Defaults to None.
            covar_module (gpytorch.kernels.Kernel, optional): GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
            likelihood (gpytorch.likelihood.Likelihood, optional): The likelihood function to use. If None defaults to Beta likelihood.
-            inducing_size (int, optional): Number of inducing points. Defaults to 100.
+            inducing_point_method (InducingPointAllocator, optional): The method to use for selecting inducing points.
+                If not set, a GreedyVarianceReduction is made.
+            inducing_size (int): Number of inducing points. Defaults to 100.
            max_fit_time (float, optional): The maximum amount of time, in seconds, to spend fitting the model. If None, there is no limit to the fitting time. Defaults to None.
        """
        if likelihood is None:
            likelihood = BetaLikelihood()
-        self.inducing_point_method = inducing_point_method
+
        super().__init__(
-            lb=lb,
-            ub=ub,
            dim=dim,
            mean_module=mean_module,
            covar_module=covar_module,
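Since inducing_point_method is now optional with a GreedyVarianceReduction fallback (mirrored in the from_config fallback above), the minimal constructor reduces to the dimensionality alone; a sketch:

from aepsych.models import GPClassificationModel

# inducing_point_method falls back to a GreedyVarianceReduction allocator when unset
model = GPClassificationModel(dim=2, inducing_size=100)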
diff --git a/api/_modules/aepsych/models/gp_regression.html b/api/_modules/aepsych/models/gp_regression.html
index abcc1f382..e227f4853 100644
--- a/api/_modules/aepsych/models/gp_regression.html
+++ b/api/_modules/aepsych/models/gp_regression.html
@@ -28,12 +28,11 @@
Source code for aepsych.models.gp_regression
from typing import Any, Dict, Optional, Tuple

import gpytorch
-import numpy as np
import torch
from aepsych.config import Config
from aepsych.factory.default import default_mean_covar_factory
from aepsych.models.base import AEPsychModelDeviceMixin
-from aepsych.utils import _process_bounds, get_optimizer_options, promote_0d
+from aepsych.utils import get_dims, get_optimizer_options, promote_0d
from aepsych.utils_logging import getLogger
from gpytorch.likelihoods import GaussianLikelihood, Likelihood
from gpytorch.models import ExactGP
@@ -51,9 +50,7 @@
"""Initialize the GP regression model Args:
- lb (torch.Tensor): Lower bounds of the parameters.
- ub (torch.Tensor): Upper bounds of the parameters.
- dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
- of lb and ub.
+ dim (int): The number of dimensions in the parameter space. mean_module (gpytorch.means.Mean, optional): GP mean class. Defaults to a constant with a normal prior. covar_module (gpytorch.kernels.Kernel, optional): GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
@@ -77,12 +71,13 @@
Source code for aepsych.models.gp_regression
            optimizer_options (Dict[str, Any], optional): Optimizer options to pass to the SciPy optimizer during
                fitting. Assumes we are using L-BFGS-B.
        """
+        self.dim = dim
+
        if likelihood is None:
            likelihood = GaussianLikelihood()

        super().__init__(None, None, likelihood)
-        lb, ub, self.dim = _process_bounds(lb, ub, dim)
        self.max_fit_time = max_fit_time
        self.optimizer_options = (
@@ -91,12 +86,10 @@
Source code for aepsych.models.gp_regression
        if mean_module is None or covar_module is None:
            default_mean, default_covar = default_mean_covar_factory(
-                dim=self.dim, stimuli_per_trial=self.stimuli_per_trial
+                dim=self.dim,
+                stimuli_per_trial=self.stimuli_per_trial,
            )

-        # Tensors need to be directly registered, Modules themselves can be assigned as attr
-        self.register_buffer("lb", lb)
-        self.register_buffer("ub", ub)
        self.likelihood = likelihood
        self.mean_module = mean_module or default_mean
        self.covar_module = covar_module or default_covar
Source code for aepsych.models.inducing_point_allocators
-#!/usr/bin/env python3
-# Copyright (c) Facebook, Inc. and its affiliates.
-# All rights reserved.
-
-# This source code is licensed under the license found in the
-# LICENSE file in the root directory of this source tree.
-
-from abc import ABC, abstractmethod
-from typing import Any, Dict, Optional, Union
-
-import numpy as np
-import torch
-
-from aepsych.config import Config, ConfigurableMixin
-from aepsych.utils import get_bounds
-from botorch.models.utils.inducing_point_allocators import (
-    GreedyVarianceReduction as BaseGreedyVarianceReduction,
-    InducingPointAllocator,
-)
-from botorch.utils.sampling import draw_sobol_samples
-from scipy.cluster.vq import kmeans2
-
-
-class BaseAllocator(InducingPointAllocator, ConfigurableMixin):
-    """Base class for inducing point allocators."""
-
-    def __init__(self, bounds: Optional[torch.Tensor] = None) -> None:
-        """
-        Initialize the allocator with optional bounds.
-
-        Args:
-            bounds (torch.Tensor, optional): Bounds for allocating points. Should be of shape (2, d).
-        """
-        self.bounds = bounds
-        self.dim = self._initialize_dim()
-
-    def _initialize_dim(self) -> Optional[int]:
-        """
-        Initialize the dimension `dim` based on the bounds, if available.
-
-        Returns:
-            int: The dimension `d` if bounds are provided, or None otherwise.
-        """
-        if self.bounds is not None:
-            # Validate bounds and extract dimension
-            assert self.bounds.shape[0] == 2, "Bounds must have shape (2, d)!"
-            lb, ub = self.bounds[0], self.bounds[1]
-            for i, (l, u) in enumerate(zip(lb, ub)):
-                assert (
-                    l <= u
-                ), f"Lower bound {l} is not less than or equal to upper bound {u} on dimension {i}!"
-            return self.bounds.shape[1]  # Number of dimensions (d)
-        return None
-
-    def _determine_dim_from_inputs(self, inputs: torch.Tensor) -> int:
-        """
-        Determine dimension `dim` from the inputs tensor.
-
-        Args:
-            inputs (torch.Tensor): Input tensor of shape (..., d).
-
-        Returns:
-            int: The inferred dimension `d`.
-        """
-        return inputs.shape[-1]
-
-    @abstractmethod
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor],
-        covar_module: Optional[torch.nn.Module],
-        num_inducing: int,
-        input_batch_shape: torch.Size,
-    ) -> torch.Tensor:
-        """
-        Abstract method for allocating inducing points.
-
-        Args:
-            inputs (torch.Tensor, optional): Input tensor, implementation-specific.
-            covar_module (torch.nn.Module, optional): Kernel covariance module.
-            num_inducing (int): Number of inducing points to allocate.
-            input_batch_shape (torch.Size): Shape of the input batch.
-
-        Returns:
-            torch.Tensor: Allocated inducing points.
-        """
-        if self.dim is None and inputs is not None:
-            self.dim = self._determine_dim_from_inputs(inputs)
-
-        raise NotImplementedError("This method should be implemented by subclasses.")
-
-    @abstractmethod
-    def _get_quality_function(self) -> Optional[Any]:
-        """
-        Abstract method for returning a quality function if required.
-
-        Returns:
-            None or Callable: Quality function if needed.
-        """
-        raise NotImplementedError("This method should be implemented by subclasses.")
-
-
-
[docs]class SobolAllocator(BaseAllocator):
-    """An inducing point allocator that uses Sobol sequences to allocate inducing points."""
-
-    def __init__(self, bounds: torch.Tensor) -> None:
-        """Initialize the SobolAllocator with bounds."""
-        self.bounds: torch.Tensor = bounds
-        super().__init__(bounds=bounds)
-
-    def _get_quality_function(self) -> None:
-        """Sobol sampling does not require a quality function, so this returns None."""
-        return None
-
-
[docs]    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-"""
- Generates `num_inducing` inducing points within the specified bounds using Sobol sampling.
-
- Args:
- inputs (torch.Tensor): Input tensor, not required for Sobol sampling.
- covar_module (torch.nn.Module, optional): Kernel covariance module; included for API compatibility, but not used here.
- num_inducing (int, optional): The number of inducing points to generate. Defaults to 10.
- input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size; included for API compatibility, but not used here.
-
-
- Returns:
- torch.Tensor: A (num_inducing, d)-dimensional tensor of inducing points within the specified bounds.
-
- Raises:
- ValueError: If `bounds` is not provided.
- """
-
-        # Validate bounds shape
-        assert (
-            self.bounds.shape[0] == 2
-        ), "Bounds must have shape (2, d) for Sobol sampling."
-        # if bounds are long, make them float
-        if self.bounds.dtype == torch.long:
-            self.bounds = self.bounds.float()
-        # Generate Sobol samples within the unit cube [0,1]^d and rescale to [bounds[0], bounds[1]]
-        inducing_points = draw_sobol_samples(
-            bounds=self.bounds, n=num_inducing, q=1
-        ).squeeze()
-
-        # Ensure correct shape in case Sobol sampling returns a 1D tensor
-        if inducing_points.ndim == 1:
-            inducing_points = inducing_points.view(-1, 1)
-        self.allocator_used = "SobolAllocator"
-        return inducing_points
-
-
[docs]@classmethod
- defget_config_options(
- cls,
- config:Config,
- name:Optional[str]=None,
- options:Optional[Dict[str,Any]]=None,
- )->Dict[str,Any]:
-"""Get configuration options for the SobolAllocator.
-
- Args:
- config (Config): Configuration object.
- name (str, optional): Name of the allocator, defaults to None.
- options (Dict[str, Any], optional): Additional options, defaults to None.
-
- Returns:
- Dict[str, Any]: Configuration options for the SobolAllocator.
- """
- ifnameisNone:
- name=cls.__name__
- lb=config.gettensor("common","lb")
- ub=config.gettensor("common","ub")
- bounds=torch.stack((lb,ub))
- return{"bounds":bounds}
-
-
-
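A usage sketch (editorial, not part of the diff); the import path is taken from this page's module path:

import torch
from aepsych.models.inducing_point_allocators import SobolAllocator

bounds = torch.tensor([[0.0, 0.0], [1.0, 10.0]])  # shape (2, d)
allocator = SobolAllocator(bounds=bounds)
points = allocator.allocate_inducing_points(num_inducing=16)
# points has shape (16, 2), quasi-uniformly covering the box [0, 1] x [0, 10]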
-class KMeansAllocator(BaseAllocator):
-    """An inducing point allocator that uses k-means++ to allocate inducing points."""
-
-    def __init__(self, bounds: Optional[torch.Tensor] = None) -> None:
-        """Initialize the KMeansAllocator."""
-        super().__init__(bounds=bounds)
-        if bounds is not None:
-            self.bounds = bounds
-            self.dummy_allocator = DummyAllocator(bounds)
-
-    def _get_quality_function(self) -> None:
-        """K-means++ does not require a quality function, so this returns None."""
-        return None
-
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-        """
-        Generates `num_inducing` inducing points using k-means++ initialization on the input data.
-
-        Args:
-            inputs (torch.Tensor): A tensor of shape (n, d) containing the input data.
-            covar_module (torch.nn.Module, optional): Kernel covariance module; included for API compatibility, but not used here.
-            num_inducing (int, optional): The number of inducing points to generate. Defaults to 10.
-            input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size; included for API compatibility, but not used here.
-
-        Returns:
-            torch.Tensor: A (num_inducing, d)-dimensional tensor of inducing points selected via k-means++.
-        """
-        if inputs is None and self.bounds is not None:
-            self.allocator_used = self.dummy_allocator.__class__.__name__
-            return self.dummy_allocator.allocate_inducing_points(
-                inputs=inputs,
-                covar_module=covar_module,
-                num_inducing=num_inducing,
-                input_batch_shape=input_batch_shape,
-            )
-        elif inputs is None and self.bounds is None:
-            raise ValueError("Either inputs or bounds must be provided.")
-
-        # Ensure inputs are unique to avoid duplication issues with k-means++
-        unique_inputs = torch.unique(inputs, dim=0)
-
-        # If unique inputs are less than or equal to the required inducing points, return them directly
-        if unique_inputs.shape[0] <= num_inducing:
-            self.allocator_used = self.__class__.__name__
-            return unique_inputs
-
-        # Run k-means++ on the unique inputs to select inducing points
-        inducing_points = torch.tensor(
-            kmeans2(unique_inputs.cpu().numpy(), num_inducing, minit="++")[0]
-        )
-        self.allocator_used = self.__class__.__name__
-        return inducing_points
-
-    @classmethod
-    def get_config_options(
-        cls,
-        config: Config,
-        name: Optional[str] = None,
-        options: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
-        """Get configuration options for the KMeansAllocator.
-
-        Args:
-            config (Config): Configuration object.
-            name (str, optional): Name of the allocator, defaults to None.
-            options (Dict[str, Any], optional): Additional options, defaults to None.
-
-        Returns:
-            Dict[str, Any]: Configuration options for the KMeansAllocator.
-        """
-        if name is None:
-            name = cls.__name__
-        lb = config.gettensor("common", "lb")
-        ub = config.gettensor("common", "ub")
-        bounds = torch.stack((lb, ub))
-        return {"bounds": bounds}
-
-
-
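A usage sketch for the data-driven path (editorial, not part of the diff):

import torch
from aepsych.models.inducing_point_allocators import KMeansAllocator

X = torch.rand(500, 2)  # observed inputs, shape (n, d)
allocator = KMeansAllocator()
points = allocator.allocate_inducing_points(inputs=X, num_inducing=20)
# points holds 20 k-means++ cluster centers computed over the unique rows of X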
-class DummyAllocator(BaseAllocator):
-    def __init__(self, bounds: torch.Tensor) -> None:
-        """Initialize the DummyAllocator with bounds.
-
-        Args:
-            bounds (torch.Tensor): Bounds for allocating points. Should be of shape (2, d).
-        """
-        super().__init__(bounds=bounds)
-        self.bounds: torch.Tensor = bounds
-
-    def _get_quality_function(self) -> None:
-        """DummyAllocator does not require a quality function, so this returns None."""
-        return None
-
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-        """Allocate inducing points by returning zeros of the appropriate shape.
-
-        Args:
-            inputs (torch.Tensor): Input tensor, not required for DummyAllocator.
-            covar_module (torch.nn.Module, optional): Kernel covariance module; included for API compatibility, but not used here.
-            num_inducing (int, optional): The number of inducing points to generate. Defaults to 10.
-            input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size; included for API compatibility, but not used here.
-
-        Returns:
-            torch.Tensor: A (num_inducing, d)-dimensional tensor of zeros.
-        """
-        self.allocator_used = self.__class__.__name__
-        return torch.zeros(num_inducing, self.bounds.shape[-1])
-
-    @classmethod
-    def get_config_options(
-        cls,
-        config: Config,
-        name: Optional[str] = None,
-        options: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
-        """Get configuration options for the DummyAllocator.
-
-        Args:
-            config (Config): Configuration object.
-            name (str, optional): Name of the allocator, defaults to None.
-            options (Dict[str, Any], optional): Additional options, defaults to None.
-
-        Returns:
-            Dict[str, Any]: Configuration options for the DummyAllocator.
-        """
-        if name is None:
-            name = cls.__name__
-        lb = config.gettensor("common", "lb")
-        ub = config.gettensor("common", "ub")
-        bounds = torch.stack((lb, ub))
-        return {"bounds": bounds}
-
-
-
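A brief sketch of the placeholder behavior (editorial, not part of the diff):

import torch
from aepsych.models.inducing_point_allocators import DummyAllocator

allocator = DummyAllocator(bounds=torch.tensor([[0.0, 0.0], [1.0, 1.0]]))
points = allocator.allocate_inducing_points(num_inducing=5)
# points is a (5, 2) tensor of zeros, a placeholder used before any data exist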
-class AutoAllocator(BaseAllocator):
-    """An inducing point allocator that dynamically chooses an allocation strategy
-    based on the number of unique data points available."""
-
-    def __init__(
-        self,
-        bounds: Optional[torch.Tensor] = None,
-        fallback_allocator: InducingPointAllocator = KMeansAllocator(),
-    ) -> None:
-        """
-        Initialize the AutoAllocator with a fallback allocator.
-
-        Args:
-            bounds (torch.Tensor, optional): Bounds for allocating points. Should be of shape (2, d).
-            fallback_allocator (InducingPointAllocator, optional): Allocator to use if there are
-                more unique points than required.
-        """
-        super().__init__(bounds=bounds)
-        self.fallback_allocator = fallback_allocator
-        if bounds is not None:
-            self.bounds = bounds
-            self.dummy_allocator = DummyAllocator(bounds=bounds)
-
-    def _get_quality_function(self) -> None:
-        """AutoAllocator does not require a quality function, so this returns None."""
-        return None
-
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor],
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-        """
-        Allocate inducing points by either using the unique input data directly
-        or falling back to another allocation method if there are too many unique points.
-
-        Args:
-            inputs (torch.Tensor): A tensor of shape (n, d) containing the input data.
-            covar_module (torch.nn.Module, optional): Kernel covariance module; included for API compatibility, but not used here.
-            num_inducing (int, optional): The number of inducing points to generate. Defaults to 10.
-            input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size; included for API compatibility, but not used here.
-
-        Returns:
-            torch.Tensor: A (num_inducing, d)-dimensional tensor of inducing points.
-        """
-        # If no inputs are available, fall back to the dummy allocator (bounds permitting)
-        if inputs is None and self.bounds is not None:
-            self.allocator_used = self.dummy_allocator.__class__.__name__
-            return self.dummy_allocator.allocate_inducing_points(
-                inputs=inputs,
-                covar_module=covar_module,
-                num_inducing=num_inducing,
-                input_batch_shape=input_batch_shape,
-            )
-        elif inputs is None and self.bounds is None:
-            raise ValueError("Either inputs or bounds must be provided.")
-
-        assert (
-            inputs is not None
-        ), "inputs should not be None here"  # to make mypy happy
-
-        unique_inputs = torch.unique(inputs, dim=0)
-
-        # If there are no more unique points than required, return unique inputs directly
-        if unique_inputs.shape[0] <= num_inducing:
-            self.allocator_used = self.__class__.__name__
-            return unique_inputs
-
-        # Otherwise, fall back to the provided allocator (e.g., KMeansAllocator)
-        if inputs.shape[0] <= num_inducing:
-            self.allocator_used = self.__class__.__name__
-            return inputs
-        else:
-            self.allocator_used = self.fallback_allocator.__class__.__name__
-            return self.fallback_allocator.allocate_inducing_points(
-                inputs=inputs,
-                covar_module=covar_module,
-                num_inducing=num_inducing,
-                input_batch_shape=input_batch_shape,
-            )
-
-    @classmethod
-    def get_config_options(
-        cls,
-        config: Config,
-        name: Optional[str] = None,
-        options: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
-        """Get configuration options for the AutoAllocator.
-
-        Args:
-            config (Config): Configuration object.
-            name (str, optional): Name of the allocator, defaults to None.
-            options (Dict[str, Any], optional): Additional options, defaults to None.
-
-        Returns:
-            Dict[str, Any]: Configuration options for the AutoAllocator.
-        """
-        if name is None:
-            name = cls.__name__
-        lb = config.gettensor("common", "lb")
-        ub = config.gettensor("common", "ub")
-        bounds = torch.stack((lb, ub))
-        fallback_allocator_cls = config.getobj(
-            name, "fallback_allocator", fallback=KMeansAllocator
-        )
-        fallback_allocator = (
-            fallback_allocator_cls.from_config(config)
-            if hasattr(fallback_allocator_cls, "from_config")
-            else fallback_allocator_cls()
-        )
-
-        return {"fallback_allocator": fallback_allocator, "bounds": bounds}
-
-
-
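A sketch of both branches of the strategy (editorial, not part of the diff):

import torch
from aepsych.models.inducing_point_allocators import AutoAllocator, KMeansAllocator

allocator = AutoAllocator(fallback_allocator=KMeansAllocator())
few = torch.rand(8, 2)
print(allocator.allocate_inducing_points(inputs=few, num_inducing=10).shape)
# torch.Size([8, 2]): at most num_inducing unique rows, so they are returned directly
many = torch.rand(500, 2)
print(allocator.allocate_inducing_points(inputs=many, num_inducing=10).shape)
# torch.Size([10, 2]): too many unique rows, so the k-means++ fallback is used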
-class FixedAllocator(BaseAllocator):
-    def __init__(
-        self, points: torch.Tensor, bounds: Optional[torch.Tensor] = None
-    ) -> None:
-        """Initialize the FixedAllocator with inducing points and bounds.
-
-        Args:
-            points (torch.Tensor): Inducing points to use.
-            bounds (torch.Tensor, optional): Bounds for allocating points. Should be of shape (2, d).
-        """
-        super().__init__(bounds=bounds)
-        self.points = points
-
-    def _get_quality_function(self) -> None:
-        """FixedAllocator does not require a quality function, so this returns None."""
-        return None
-
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-        """Allocate inducing points by returning the fixed inducing points.
-
-        Args:
-            inputs (torch.Tensor): Input tensor, not required for FixedAllocator.
-            covar_module (torch.nn.Module, optional): Kernel covariance module; included for API compatibility, but not used here.
-            num_inducing (int, optional): Included for API compatibility, but not used here; the fixed points are always returned. Defaults to 10.
-            input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size; included for API compatibility, but not used here.
-
-        Returns:
-            torch.Tensor: The fixed inducing points.
-        """
-        self.allocator_used = self.__class__.__name__
-        return self.points
-
-    @classmethod
-    def get_config_options(
-        cls,
-        config: Config,
-        name: Optional[str] = None,
-        options: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
-        """Get configuration options for the FixedAllocator.
-
-        Args:
-            config (Config): Configuration object.
-            name (str, optional): Name of the allocator, defaults to None.
-            options (Dict[str, Any], optional): Additional options, defaults to None.
-
-        Returns:
-            Dict[str, Any]: Configuration options for the FixedAllocator.
-        """
-        if name is None:
-            name = cls.__name__
-        lb = config.gettensor("common", "lb")
-        ub = config.gettensor("common", "ub")
-        bounds = torch.stack((lb, ub))
-        num_inducing = config.getint("common", "num_inducing", fallback=99)
-        fallback_allocator = config.getobj(
-            name, "fallback_allocator", fallback=DummyAllocator(bounds=bounds)
-        )
-        points = config.gettensor(
-            name,
-            "points",
-            fallback=fallback_allocator.allocate_inducing_points(
-                num_inducing=num_inducing
-            ),
-        )
-        return {"points": points, "bounds": bounds}
-
-
-
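A usage sketch with a fixed grid (editorial, not part of the diff):

import torch
from aepsych.models.inducing_point_allocators import FixedAllocator

grid = torch.cartesian_prod(torch.linspace(0, 1, 3), torch.linspace(0, 1, 3))
allocator = FixedAllocator(points=grid)
points = allocator.allocate_inducing_points(num_inducing=10)
# points is always the fixed (9, 2) grid; num_inducing is ignored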
-class GreedyVarianceReduction(BaseGreedyVarianceReduction, ConfigurableMixin):
-    def __init__(self, bounds: Optional[torch.Tensor] = None) -> None:
-        """Initialize the GreedyVarianceReduction with bounds.
-
-        Args:
-            bounds (torch.Tensor, optional): Bounds for allocating points. Should be of shape (2, d).
-        """
-        super().__init__()
-
-        self.bounds = bounds
-        if bounds is not None:
-            self.dummy_allocator = DummyAllocator(bounds)
-        self.dim = self._initialize_dim()
-
-    def _initialize_dim(self) -> Optional[int]:
-        """Initialize the dimension `dim` based on the bounds, if available.
-
-        Returns:
-            int: The dimension `d` if bounds are provided, or None otherwise.
-        """
-        if self.bounds is not None:
-            assert self.bounds.shape[0] == 2, "Bounds must have shape (2, d)!"
-            lb, ub = self.bounds[0], self.bounds[1]
-            for i, (l, u) in enumerate(zip(lb, ub)):
-                assert (
-                    l <= u
-                ), f"Lower bound {l} is not less than or equal to upper bound {u} on dimension {i}!"
-            return self.bounds.shape[1]
-        return None
-
-    def allocate_inducing_points(
-        self,
-        inputs: Optional[torch.Tensor] = None,
-        covar_module: Optional[torch.nn.Module] = None,
-        num_inducing: int = 10,
-        input_batch_shape: torch.Size = torch.Size([]),
-    ) -> torch.Tensor:
-        """Allocate inducing points using the GreedyVarianceReduction strategy.
-
-        Args:
-            inputs (torch.Tensor, optional): Input tensor; if None, the dummy allocator is used instead.
-            covar_module (torch.nn.Module, optional): Kernel covariance module.
-            num_inducing (int, optional): The number of inducing points to generate. Defaults to 10.
-            input_batch_shape (torch.Size, optional): Batch shape, defaults to an empty size.
-
-        Returns:
-            torch.Tensor: The allocated inducing points.
-        """
-        if inputs is None and self.bounds is not None:
-            self.allocator_used = self.dummy_allocator.__class__.__name__
-            return self.dummy_allocator.allocate_inducing_points(
-                inputs=inputs,
-                covar_module=covar_module,
-                num_inducing=num_inducing,
-                input_batch_shape=input_batch_shape,
-            )
-        elif inputs is None and self.bounds is None:
-            raise ValueError("Either inputs or bounds must be provided.")
-        else:
-            self.allocator_used = self.__class__.__name__
-            return super().allocate_inducing_points(
-                inputs=inputs,
-                covar_module=covar_module,
-                num_inducing=num_inducing,
-                input_batch_shape=input_batch_shape,
-            )
-
-    @classmethod
-    def get_config_options(
-        cls,
-        config: Config,
-        name: Optional[str] = None,
-        options: Optional[Dict[str, Any]] = None,
-    ) -> Dict[str, Any]:
-        """Get configuration options for the GreedyVarianceReduction allocator.
-
-        Args:
-            config (Config): Configuration object.
-            name (str, optional): Name of the allocator, defaults to None.
-            options (Dict[str, Any], optional): Additional options, defaults to None.
-
-        Returns:
-            Dict[str, Any]: Configuration options for the GreedyVarianceReduction allocator.
-        """
-        if name is None:
-            name = cls.__name__
-        lb = config.gettensor("common", "lb")
-        ub = config.gettensor("common", "ub")
-        bounds = torch.stack((lb, ub))
-        return {"bounds": bounds}
\ No newline at end of file
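A configuration sketch (editorial, not part of the diff): `get_config_options` reads `lb`/`ub` from the `[common]` section. This assumes `Config` accepts a `config_str`, as used elsewhere in AEPsych.

from aepsych.config import Config
from aepsych.models.inducing_point_allocators import GreedyVarianceReduction

config = Config(config_str="""
[common]
lb = [0, 0]
ub = [1, 1]
""")
opts = GreedyVarianceReduction.get_config_options(config)
allocator = GreedyVarianceReduction(**opts)  # opts == {"bounds": torch.stack((lb, ub))}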
diff --git a/api/_modules/aepsych/models/inducing_point_allocators/index.html b/api/_modules/aepsych/models/inducing_point_allocators/index.html
deleted file mode 100644
index c56bf9b1c..000000000
--- a/api/_modules/aepsych/models/inducing_point_allocators/index.html
+++ /dev/null
diff --git a/api/_modules/aepsych/models/monotonic_projection_gp.html b/api/_modules/aepsych/models/monotonic_projection_gp.html
index 739195649..f8ee6dda0 100644
--- a/api/_modules/aepsych/models/monotonic_projection_gp.html
+++ b/api/_modules/aepsych/models/monotonic_projection_gp.html
@@ -33,9 +33,9 @@
Source code for aepsych.models.monotonic_projection_gp
         self,
         lb: torch.Tensor,
         ub: torch.Tensor,
-        inducing_point_method: InducingPointAllocator,
+        dim: int,
         monotonic_dims: List[int],
         monotonic_grid_size: int = 20,
         min_f_val: Optional[float] = None,
-        dim: Optional[int] = None,
         mean_module: Optional[gpytorch.means.Mean] = None,
         covar_module: Optional[gpytorch.kernels.Kernel] = None,
         likelihood: Optional[Likelihood] = None,
-        inducing_size: Optional[int] = None,
+        inducing_point_method: Optional[InducingPointAllocator] = None,
+        inducing_size: int = 100,
         max_fit_time: Optional[float] = None,
         optimizer_options: Optional[Dict[str, Any]] = None,
     ) -> None:
-        """Initialize the MonotonicProjectionGP model.
+        """Initialize the MonotonicProjectionGP model. Unlike other models, this model needs bounds.

         Args:
             lb (torch.Tensor): Lower bounds of the parameters.
             ub (torch.Tensor): Upper bounds of the parameters.
-            inducing_point_method (InducingPointAllocator): The method for allocating inducing points.
+            dim (int): The number of dimensions in the parameter space.
             monotonic_dims (List[int]): A list of the dimensions on which monotonicity should be enforced.
             monotonic_grid_size (int): The size of the grid, s, in 1. above. Defaults to 20.
             min_f_val (float, optional): If provided, maintains this minimum in the projection in 5. Defaults to None.
-            dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
-                of lb and ub. Defaults to None.
             mean_module (gpytorch.means.Mean, optional): GP mean class. Defaults to a constant with a normal prior.
             covar_module (gpytorch.kernels.Kernel, optional): GP covariance kernel class. Defaults to scaled RBF with a gamma prior.
             likelihood (Likelihood, optional): The likelihood function to use. If None, defaults to Gaussian likelihood.
-            inducing_size (int, optional): The number of inducing points to use. Defaults to None.
+            inducing_point_method (InducingPointAllocator, optional): The method to use for selecting inducing points.
+                If not set, a GreedyVarianceReduction is made.
+            inducing_size (int): The number of inducing points to use. Defaults to 100.
             max_fit_time (float, optional): The maximum amount of time, in seconds, to spend fitting the model.
                 If None, there is no limit to the fitting time. Defaults to None.
         """
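To make the signature change concrete, a hedged construction sketch (the import path is assumed; keyword names follow the new signature above):

import torch
from aepsych.models import MonotonicProjectionGP  # import path assumed

model = MonotonicProjectionGP(
    lb=torch.tensor([0.0, 0.0]),
    ub=torch.tensor([1.0, 1.0]),
    dim=2,               # now required rather than inferred from lb/ub
    monotonic_dims=[1],  # enforce monotonicity along the second dimension
    # inducing_point_method omitted: a GreedyVarianceReduction is created
    inducing_size=100,
)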
Source code for aepsych.models.monotonic_rejection_gp
 from gpytorch.models import ApproximateGP
 from gpytorch.variational import CholeskyVariationalDistribution, VariationalStrategy
 from scipy.stats import norm
-from torch import Tensor
Source code for aepsych.models.monotonic_rejection_gp
             num_samples (int): Number of samples for estimating posterior on predict or
                 acquisition function evaluation. Defaults to 250.
             num_rejection_samples (int): Number of samples used for rejection sampling. Defaults to 4096.
-            inducing_point_method (InducingPointAllocator): Method for selecting inducing points. Defaults to AutoAllocator().
+            inducing_point_method (InducingPointAllocator, optional): Method for selecting inducing points. If not set,
+                a GreedyVarianceReduction is created.
             optimizer_options (Dict[str, Any], optional): Optimizer options to pass to the SciPy optimizer during
                 fitting. Assumes we are using L-BFGS-B.
         """
@@ -115,11 +110,12 @@
Source code for aepsych.models.monotonic_rejection_gp
     def __init__(
         self,
-        lb: torch.Tensor,
-        ub: torch.Tensor,
-        inducing_point_method: InducingPointAllocator,
-        dim: Optional[int] = None,
+        dim: int,
         stim_dim: int = 0,
         mean_module: Optional[gpytorch.means.Mean] = None,
         covar_module: Optional[gpytorch.kernels.Kernel] = None,
         likelihood: Optional[Any] = None,
         slope_mean: float = 2,
-        inducing_size: Optional[int] = None,
+        inducing_point_method: Optional[InducingPointAllocator] = None,
+        inducing_size: int = 100,
         max_fit_time: Optional[float] = None,
         optimizer_options: Optional[Dict[str, Any]] = None,
     ) -> None:
         """
         Initialize SemiParametricGP.

         Args:
-            lb (torch.Tensor): Lower bounds of the parameters.
-            ub (torch.Tensor): Upper bounds of the parameters.
-            inducing_point_method (InducingPointAllocator): The method to use to select the inducing points.
-            dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
-                of lb and ub. Defaults to None.
+            dim (int): The number of dimensions in the parameter space.
             stim_dim (int): Index of the intensity (monotonic) dimension. Defaults to 0.
             mean_module (gpytorch.means.Mean, optional): GP mean class. Defaults to a constant with a normal prior.
             covar_module (gpytorch.kernels.Kernel, optional): GP covariance kernel class. Defaults to scaled RBF with a
@@ -297,16 +292,18 @@
Source code for aepsych.models.semi_p
             likelihood (gpytorch.likelihood.Likelihood, optional): The likelihood function to use. If None, defaults
                 to a linear-Bernoulli likelihood with probit link.
             slope_mean (float): The mean of the slope. Defaults to 2.
-            inducing_size (int, optional): Number of inducing points. Defaults to 99.
+            inducing_point_method (InducingPointAllocator, optional): The method to use for selecting inducing points.
+                If not set, a GreedyVarianceReduction is made.
+            inducing_size (int): Number of inducing points. Defaults to 100.
             max_fit_time (float, optional): The maximum amount of time, in seconds, to spend fitting the model. If None,
                 there is no limit to the fitting time.
             optimizer_options (Dict[str, Any], optional): Optimizer options to pass to the SciPy optimizer during
                 fitting. Assumes we are using L-BFGS-B.
         """
-        lb, ub, dim = _process_bounds(lb, ub, dim)
+        self.dim = dim
         self.stim_dim = stim_dim
-        self.context_dims = list(range(dim))
+        self.context_dims = list(range(self.dim))
         self.context_dims.pop(stim_dim)

         if mean_module is None:
@@ -319,7 +316,7 @@
Source code for aepsych.models.semi_p
         if covar_module is None:
             covar_module = ScaleKernel(
                 RBFKernel(
-                    ard_num_dims=dim - 1,
+                    ard_num_dims=self.dim - 1,
                     lengthscale_prior=GammaPrior(3, 6),
                     active_dims=self.context_dims,  # Operate only on x_s
                     batch_shape=torch.Size([2]),
@@ -331,11 +328,8 @@
Source code for aepsych.models.semi_p
         assert isinstance(
             likelihood, LinearBernoulliLikelihood
         ), "SemiP model only supports linear Bernoulli likelihoods!"
-        self.inducing_point_method = inducing_point_method
         super().__init__(
-            lb=lb,
-            ub=ub,
             dim=dim,
             mean_module=mean_module,
             covar_module=covar_module,
@@ -361,16 +355,17 @@
Source code for aepsych.models.semi_p
         """
         classname = cls.__name__
-        inducing_size = config.getint(classname, "inducing_size", fallback=None)
+        inducing_size = config.getint(classname, "inducing_size", fallback=100)
-        lb = config.gettensor(classname, "lb")
-        ub = config.gettensor(classname, "ub")
         dim = config.getint(classname, "dim", fallback=None)
+        if dim is None:
+            dim = get_dims(config)
+
         max_fit_time = config.getfloat(classname, "max_fit_time", fallback=None)

         inducing_point_method_class = config.getobj(
-            classname, "inducing_point_method", fallback=AutoAllocator
+            classname, "inducing_point_method", fallback=GreedyVarianceReduction
         )
         # Check if allocator class has a `from_config` method
         if hasattr(inducing_point_method_class, "from_config"):
@@ -393,8 +388,6 @@
         offset_covar_module: Optional[gpytorch.kernels.Kernel] = None,
         likelihood: Optional[Likelihood] = None,
         slope_mean: float = 2,
-        inducing_size: Optional[int] = None,
+        inducing_point_method: Optional[InducingPointAllocator] = None,
+        inducing_size: int = 100,
         max_fit_time: Optional[float] = None,
         optimizer_options: Optional[Dict[str, Any]] = None,
     ) -> None:
         """
         Initialize HadamardSemiPModel.

         Args:
-            lb (torch.Tensor): Lower bounds of the parameters.
-            ub (torch.Tensor): Upper bounds of the parameters.
-            inducing_point_method (InducingPointAllocator): The method to use to select the inducing points.
-            dim (int, optional): The number of dimensions in the parameter space. If None, it is inferred from the size
-                of lb and ub.
+            dim (int): The number of dimensions in the parameter space.
             stim_dim (int): Index of the intensity (monotonic) dimension. Defaults to 0.
             slope_mean_module (gpytorch.means.Mean, optional): Mean module to use (default: constant mean) for slope.
             slope_covar_module (gpytorch.kernels.Kernel, optional): Covariance kernel to use (default: scaled RBF) for slope.
@@ -567,16 +554,15 @@
Source code for aepsych.models.semi_p
             offset_covar_module (gpytorch.kernels.Kernel, optional): Covariance kernel to use (default: scaled RBF) for offset.
             likelihood (gpytorch.likelihood.Likelihood, optional): Defaults to Bernoulli with logistic input and a floor of .5.
             slope_mean (float): The mean of the slope. Defaults to 2.
-            inducing_size (int, optional): Number of inducing points. Defaults to 99.
+            inducing_point_method (InducingPointAllocator, optional): The method to use for selecting inducing points.
+                If not set, a GreedyVarianceReduction is made.
+            inducing_size (int): Number of inducing points. Defaults to 100.
             max_fit_time (float, optional): The maximum amount of time, in seconds, to spend fitting the model. If None,
                 there is no limit to the fitting time.
             optimizer_options (Dict[str, Any], optional): Optimizer options to pass to the SciPy optimizer during
                 fitting. Assumes we are using L-BFGS-B.
         """
-        self.inducing_point_method = inducing_point_method
         super().__init__(
-            lb=lb,
-            ub=ub,
             dim=dim,
             inducing_size=inducing_size,
             max_fit_time=max_fit_time,
@@ -674,11 +660,11 @@
         max_fit_time = config.getfloat(classname, "max_fit_time", fallback=None)

         inducing_point_method_class = config.getobj(
-            classname, "inducing_point_method", fallback=AutoAllocator
+            classname, "inducing_point_method", fallback=GreedyVarianceReduction
         )
         # Check if allocator class has a `from_config` method
         if hasattr(inducing_point_method_class, "from_config"):
@@ -714,8 +700,6 @@
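The `from_config` changes above derive `dim` from the config when it is unset, via `get_dims`. A sketch of that fallback (editorial; assumes the usual `[common]` keys and that `Config` accepts a `config_str`):

from aepsych.config import Config
from aepsych.utils import get_dims  # used by the new from_config fallback

config = Config(config_str="""
[common]
parnames = [context, intensity]
lb = [0, 0]
ub = [1, 1]
""")
print(get_dims(config))  # expected: 2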