Added a simple particle swarm based hyper-heuristic #125
base: feature-hho
Conversation
Thank you for the submission, @Alan-Robertson! It's looking like a really good feature so far. I've left comments on a few individual issues, but in combination with some tests and some documentation, I think this will be awesome to have included. ♥
@@ -0,0 +1,202 @@
We should make sure the copyright header and `-*- coding` lines are kept on this file as well.
    ## CLASSES ####################################################################

    __all__ = [
        'ParticleSwarmOptimiser'
The performance objective class should also be listed under `__all__`.
        'ParticleSwarmOptimiser'
    ]


    class HyperHeuristicOptimiser(object):
At the level of generality that this class is written, including `HyperHeuristic` in the name may be slightly misleading? I think it makes a lot of sense to keep the default choice of function being the performance objective below, but to rename here as a nod to that generality. Perhaps even just `Optimiser` (or even `Optimizer`, since the project was started in American English instead of something a bit more common)...
            self._funct_kwargs = funct_kwargs

            if fitness_function is None:  # Default to calling perf test multiple
                self._optimisable = PerfTestMultipleAbstractor(
Should this be `_optimisable` or `_fitness_function`? I don't see any other references to `_optimisable`?
        def fitness_function(self, params):
            return self._fitness_function(params)

        def parrallel(self):
I think there's only one "r" in parallel?
            omega_v=0.35,
            phi_p=0.25,
            phi_g=0.5,
            serial_map=map
The name `serial_map` is a bit misleading here, since it's either serial or parallel depending?
                self._fitness[itr - 1]["velocities"])

            # Apply the boundary conditions if any exist
            if self._boundary_map is not None:
The behavior of `_boundary_map` seems to be similar to `Model.canonicalize` rather than `Models.are_models_valid`? If so, that's quite reasonable, but should be clearly indicated in the documentation since it's an unusual way to specify boundaries.
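To illustrate the distinction: a canonicalize-style boundary map transforms out-of-range particle positions back into the feasible region, rather than flagging them as invalid. A minimal sketch of that behavior, using a hypothetical clipping helper (not QInfer API), might look like:

```python
import numpy as np

def clip_boundary_map(positions, lower=0.0, upper=1.0):
    # Canonicalize-style boundary handling: out-of-range particle
    # positions are mapped back into [lower, upper] rather than
    # being rejected as invalid. (Hypothetical helper, not QInfer API.)
    return np.clip(positions, lower, upper)

print(clip_boundary_map(np.array([[-0.2, 0.5], [1.3, 0.9]])))
```

A validity-check-style API would instead return a boolean mask and leave it to the caller to discard or resample the offending particles.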
            fitness = serial_map(self.fitness_function, particles)
            return fitness

        def update_positions(self, positions, velocities):
These methods are public and should thus have at least some brief docstrings. They probably don't need to be very detailed, but there should be something.
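Even a one-line docstring per public method would address this; for instance (wording is illustrative, not the PR's actual code):

```python
import numpy as np

def update_positions(positions, velocities):
    """Advance each particle's position by one velocity step."""
    # A brief summary line like the one above is often enough for
    # simple public methods; details can be added later if needed.
    return np.asarray(positions) + np.asarray(velocities)

print(update_positions([1.0, 2.0], [0.5, -0.5]))  # [1.5 1.5]
```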
                     *args,
                     **kwargs):
            self._heuristic = kwargs['heuristic_class']
            del kwargs['heuristic_class']
This can cause an exception if `heuristic_class` isn't defined. That's clearly an error, since this class doesn't make any sense without a heuristic class, but the exception that gets raised isn't immediately obvious. It might make sense to promote `heuristic_class` to being an explicitly named kwarg so that exceptions raised are more readable.
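Sketching that suggestion: naming `heuristic_class` in the signature turns the opaque `KeyError` into a self-explanatory `TypeError`. The class name here is illustrative, not the PR's actual code:

```python
class HeuristicPerformanceObjective(object):
    # Hypothetical sketch of the suggested signature change.
    def __init__(self, heuristic_class, *args, **kwargs):
        # Because heuristic_class is part of the signature, omitting it
        # raises a TypeError that names the missing argument directly.
        self._heuristic = heuristic_class
        self._args = args
        self._kwargs = kwargs

try:
    HeuristicPerformanceObjective()
except TypeError as e:
    print(e)  # names the missing 'heuristic_class' argument
```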
            global_attractor = local_attractors[np.argmin(local_attractors["fitness"])]
            return local_attractors, global_attractor


    class PerfTestMultipleAbstractor:
I think the term `Abstractor` here doesn't quite communicate what the class is doing. I'd suggest something like `PerformanceObjective`, perhaps? Also, this should inherit from `object` so that it's forced to be a new-style class even on Python 2.
It seems that, unrelated to the PR itself, there are build failures caused by Travis not being properly configured for QuTiP 4.1 (see discussion at #126). I'll work on getting that diagnosed, then, so we can include this PR ASAP.
When you get a second, can you pull master into this PR to include #127? That should help fix build errors. Thanks!
…-hyper-heuristic-optimisation
It looks like pulling in master helped fix the issues with the build, thanks again for that!
I think some documentation on use cases for the hyper-heuristic optimiser would be nice. From this commit alone I am having trouble piecing it together. I assume this is related to the paper recently listed on the arXiv by Alan and Chris, but I have yet to find time to read it.
I agree entirely; the plan is to add documentation to both the guide and apiref, including descriptions of a few explicit use cases. For the moment, we wanted to start the PR as-is to get initial feedback on the design, since ideally the particle swarm optimizer will be useful outside the immediate use cases that @Alan-Robertson described in his initial comment.
…-hyper-heuristic-optimisation
…r travis compatibility
This pull request adds the first hyper-heuristic optimisation algorithm; an example follows.
First, we define a test function:
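The original example snippet wasn't captured in this page, so as a stand-in, a standard toy objective such as the sphere function could serve as the test function to minimise:

```python
import numpy as np

def sphere(params):
    # Sum of squares; minimised at the origin, where it equals 0.
    # (Illustrative test function, not the PR's original example.)
    return float(np.sum(np.asarray(params) ** 2))

print(sphere([3.0, 4.0]))  # 25.0
```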
Next, we initialise the hyper-heuristic optimiser, pass it the relevant parameters over which to optimise, and call it to perform the optimisation:
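The exact constructor call isn't preserved here, so the following standalone sketch only illustrates the kind of particle swarm update the optimiser performs, using the default hyperparameters visible in the diff above (`omega_v=0.35`, `phi_p=0.25`, `phi_g=0.5`). It is not the QInfer `ParticleSwarmOptimiser` API:

```python
import numpy as np

def particle_swarm_minimise(fitness, n_particles=20, n_dims=2, n_iters=100,
                            omega_v=0.35, phi_p=0.25, phi_g=0.5,
                            map_fn=map, rng=None):
    # Hyperparameter defaults taken from the diff; everything else in
    # this sketch is illustrative, not the QInfer implementation.
    rng = np.random.default_rng(rng)
    positions = rng.uniform(-1.0, 1.0, (n_particles, n_dims))
    velocities = np.zeros_like(positions)
    best_pos = positions.copy()
    best_fit = np.array(list(map_fn(fitness, positions)))
    g_idx = int(np.argmin(best_fit))
    g_pos, g_fit = best_pos[g_idx].copy(), best_fit[g_idx]
    for _ in range(n_iters):
        r_p = rng.uniform(size=positions.shape)
        r_g = rng.uniform(size=positions.shape)
        # Standard PSO velocity update: inertia plus attraction towards
        # each particle's best-seen position and the global best.
        velocities = (omega_v * velocities
                      + phi_p * r_p * (best_pos - positions)
                      + phi_g * r_g * (g_pos - positions))
        positions = positions + velocities
        fit = np.array(list(map_fn(fitness, positions)))
        improved = fit < best_fit
        best_pos[improved] = positions[improved]
        best_fit[improved] = fit[improved]
        g_idx = int(np.argmin(best_fit))
        if best_fit[g_idx] < g_fit:
            g_pos, g_fit = best_pos[g_idx].copy(), best_fit[g_idx]
    return g_pos, g_fit

best_params, best_value = particle_swarm_minimise(
    lambda p: float(np.sum(p ** 2)), rng=0)
print(best_params, best_value)
```

Note that `map_fn` defaults to the serial built-in `map`, mirroring the `serial_map` parameter discussed above.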
Similarly, this is parallelisable given a map function:
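Since any map-compatible callable can be supplied, a drop-in parallel replacement can come straight from the standard library; this sketch (not the PR's own example) uses a thread pool:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# A thread pool's .map is a drop-in replacement for the built-in map,
# letting fitness evaluations run concurrently.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(5)))
print(results)  # [0, 1, 4, 9, 16]
```

For CPU-bound fitness functions, `multiprocessing.Pool.map` would be the analogous process-based choice.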