[GENERAL SUPPORT]: Implementation of Evolution-guided BO #3198

Open · VMLC-PV opened this issue Dec 19, 2024 · 1 comment
Labels: question (Further information is requested)

Comments

@VMLC-PV commented Dec 19, 2024

Question

I was thinking of implementing the Evolution-guided BO (EGBO) described in this paper, and I thought it would make sense to write something with a structure similar to the SEBOAcquisition class and an .optimize similar to the EGBO implemented in this repo.
To do so, I need to pull back the untransformed X_observed and the corresponding metrics.
I went down the rabbit hole and explored the attributes of many of the objects passed to SEBOAcquisition, but could not find what I needed. Any ideas?

I know I could use the original code here, but I was hoping to write something more 'Ax-like'.
For context, the code from the repo implementing the EGBO is reproduced below.

import numpy as np
import torch
from botorch.utils.multi_objective.pareto import is_non_dominated
from pymoo.algorithms.moo.nsga3 import NSGA3
from pymoo.core.population import Population
from pymoo.core.problem import Problem as PymooProblem  # assumed alias for pymoo's base Problem
from pymoo.core.termination import NoTermination
from pymoo.util.ref_dirs import get_reference_directions

# n_tasks, raw_samples, and tkwargs (torch dtype/device kwargs) are
# module-level settings in the original repo.

def optimize_st_egbo(acq_func, x, y, batch_size):
    # for st qnehvi
    pareto_mask = is_non_dominated(y)
    pareto_x = x[pareto_mask].cpu().numpy()  # computed in the original script; unused below

    # unconstrained problem over the normalized [0, 1]^d search space
    problem = PymooProblem(
        n_var=x.shape[1], n_obj=y.shape[1], n_constr=0,
        xl=np.zeros(x.shape[1]), xu=np.ones(x.shape[1]),
    )
    ref_dirs = get_reference_directions("energy", y.shape[1], batch_size * n_tasks)
    algo = NSGA3(pop_size=raw_samples, ref_dirs=ref_dirs)

    # seed NSGA-III with the observed data (pymoo minimizes, hence -y)
    algo.setup(problem, termination=NoTermination())
    pop = Population.new("X", x.cpu().numpy())
    pop.set("F", -y.cpu().numpy())
    algo.tell(infills=pop)
    new_pop = algo.ask()

    candidates = torch.tensor(new_pop.get("X"), **tkwargs)

    # rank the evolved candidates by acquisition value
    acq_value_list = [
        acq_func(candidates[i].unsqueeze(dim=0)).detach().item()
        for i in range(candidates.shape[0])
    ]
    sorted_x = candidates.cpu().numpy()[np.argsort(acq_value_list)]

    return torch.tensor(sorted_x[-batch_size:], **tkwargs)  # take best batch_size samples
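
For reference, a call would look roughly like the following; the tensor shapes, the values of the repo's globals, and the acquisition function are my assumptions, not taken from the repo.

# Hypothetical usage: x is an (n, d) tensor of inputs normalized to [0, 1],
# y an (n, m) tensor of objective values, acq_func a fitted qNEHVI instance.
n_tasks = 1                     # assumed value for the repo's global
raw_samples = 256               # assumed value for the repo's global
tkwargs = {"dtype": torch.double, "device": torch.device("cpu")}

new_x = optimize_st_egbo(acq_func, x, y, batch_size=4)  # returns a (4, d) tensor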

Please provide any relevant code snippet if applicable.

No response

Code of Conduct

  • I agree to follow Ax's Code of Conduct
@VMLC-PV added the question (Further information is requested) label Dec 19, 2024
@CristianLara CristianLara self-assigned this Dec 19, 2024
@sdaulton (Contributor) commented
If all you need access to is the training data (Xs, Ys), you should be able to get this from Surrogate.training_data:

    def training_data(self) -> list[SupervisedDataset]:

The surrogate is passed to Acquisition.__init__, so you could subclass Acquisition similar to SEBO and then store the training data on the Acquisition object, so that you can access it during optimize.
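
A minimal sketch of that suggestion, assuming the surrogate is the first argument to Acquisition.__init__ (the exact signature varies across Ax versions); EGBOAcquisition and the underscore attributes are hypothetical names:

import torch
from ax.models.torch.botorch_modular.acquisition import Acquisition


class EGBOAcquisition(Acquisition):
    # Hypothetical subclass that stores the training data so that a custom
    # optimize() can seed the NSGA-III population, as in optimize_st_egbo above.

    def __init__(self, surrogate, **kwargs):
        super().__init__(surrogate=surrogate, **kwargs)
        # Surrogate.training_data is a list[SupervisedDataset]; each dataset
        # exposes its features/observations as .X and .Y tensors. This assumes
        # all datasets share the same X (one dataset per metric).
        datasets = surrogate.training_data
        self._X_observed = datasets[0].X
        self._Y_observed = torch.cat([ds.Y for ds in datasets], dim=-1)

    def optimize(self, *args, **kwargs):
        # Build the initial pymoo Population from self._X_observed /
        # self._Y_observed and rank candidates by acquisition value here;
        # falling back to the stock optimizer keeps this sketch runnable.
        return super().optimize(*args, **kwargs)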
