update readme

This commit is contained in:
root
2024-07-12 06:12:35 +08:00
parent fb60d694ba
commit 58c56ab2ab
2 changed files with 76 additions and 108 deletions

README.md

@@ -22,94 +22,66 @@
## Introduction
TensorNEAT is a JAX-based library for NeuroEvolution of Augmenting Topologies (NEAT) algorithms, focused on harnessing GPU acceleration to improve the efficiency of evolving neural network structures for complex tasks. Its core mechanism is the tensorization of network topologies, which enables parallel processing and significantly boosts computational speed and scalability on modern hardware accelerators. TensorNEAT is compatible with the [EvoX](https://github.com/EMI-Group/evox/) framework.
## Key Features
- JAX-based network for neuroevolution:
  - **Batch inference** across networks with different architectures, GPU-accelerated.
  - Evolve networks with **irregular structures** and **fully customize** their behavior.
  - Visualize the network and represent it in **mathematical formulas**.
- GPU-accelerated NEAT implementation:
  - Run NEAT and HyperNEAT on GPUs.
  - Achieve **500x** speedup compared to CPU-based NEAT libraries.
- Rich in extended content:
  - Compatible with **EvoX** for multi-device and distributed support.
  - Test neuroevolution algorithms on advanced **RL tasks** (Brax, Gymnax).
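The tensorization idea behind batch inference can be sketched in plain JAX. This is an illustrative toy, not TensorNEAT's actual tensor layout: each network is a padded weight matrix of a fixed maximum size (`MAX_NODES` is a hypothetical cap, playing the role of a `max_nodes` setting), so a whole population of differently wired networks can be evaluated in one `jax.vmap` call.

```python
import jax
import jax.numpy as jnp

MAX_NODES = 8  # hypothetical cap on network size, akin to max_nodes

def forward(w, x):
    # One network = one padded (MAX_NODES, MAX_NODES) weight matrix;
    # unused connections are simply 0, so irregular topologies all
    # share a single tensor layout.
    h = jnp.zeros(MAX_NODES).at[: x.shape[0]].set(x)
    for _ in range(MAX_NODES):  # enough propagation steps to reach any node
        h = jnp.tanh(w @ h).at[: x.shape[0]].set(x)  # keep inputs clamped
    return h[-1]  # read the last slot as the network output

# A whole population of networks lives in one tensor, so jax.vmap
# evaluates all of them on the same input in one batched, GPU-friendly call.
key = jax.random.PRNGKey(0)
pop_w = jax.random.normal(key, (1000, MAX_NODES, MAX_NODES))
x = jnp.array([1.0, 0.0, 1.0])
outputs = jax.vmap(forward, in_axes=(0, None))(pop_w, x)
print(outputs.shape)  # (1000,)
```

Because the population is a single array, the same pattern extends to batching over inputs as well, which is where the GPU speedup comes from.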
## Basic API Usage
Start your journey with TensorNEAT in a few simple steps:
1. **Import necessary modules**:
```python
from tensorneat.pipeline import Pipeline
from tensorneat import algorithm, genome, problem, common
```
2. **Configure the NEAT algorithm and define a problem**:
```python
algorithm = algorithm.NEAT(
    pop_size=10000,
    species_size=20,
    survival_threshold=0.01,
    genome=genome.DefaultGenome(
        num_inputs=3,
        num_outputs=1,
        output_transform=common.ACT.sigmoid,
    ),
)
problem = problem.XOR3d()
```
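`XOR3d` scores a network on all 2³ input patterns of three-bit XOR. A common NEAT convention, which we assume here, is fitness = negative mean squared error, so a perfect network approaches 0 from below; that is why a `fitness_target` just under 0 (such as `-1e-6`) is a sensible stopping point. A plain NumPy sketch of that scoring:

```python
import itertools
import numpy as np

# All 8 input patterns of 3-bit XOR and their parity targets.
inputs = np.array(list(itertools.product([0.0, 1.0], repeat=3)))
targets = inputs.sum(axis=1) % 2  # XOR of three bits = their parity

def fitness(predict):
    # Assumed convention: fitness is negative MSE over all patterns,
    # so higher is better and a perfect network scores exactly 0.
    preds = np.array([predict(x) for x in inputs])
    return -np.mean((preds - targets) ** 2)

print(fitness(lambda x: x.sum() % 2))  # a perfect predictor scores -0.0
```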
3. **Initialize the pipeline and run**:
```python
pipeline = Pipeline(
    algorithm,
    problem,
    generation_limit=200,
    fitness_target=-1e-6,
    seed=42,
)
state = pipeline.setup()
# run until termination
state, best = pipeline.auto_run(state)
# show results
pipeline.show(state, best)
```
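The pipeline arguments map onto a simple generational loop: conceptually, `auto_run` repeats generations until `generation_limit` is exhausted or the best fitness reaches `fitness_target`. The sketch below is a hypothetical illustration of that control flow, not TensorNEAT's implementation:

```python
# A hypothetical sketch of the auto_run control flow: evolve one
# generation at a time until the generation limit is exhausted or the
# fitness target is reached.
def auto_run_sketch(step, generation_limit, fitness_target):
    state, best, best_fitness = 0, None, float("-inf")
    for _ in range(generation_limit):
        state, candidate, fit = step(state)  # one NEAT generation
        if fit > best_fitness:
            best, best_fitness = candidate, fit
        if best_fitness >= fitness_target:
            break  # early stop, like fitness_target in Pipeline
    return state, best

# Toy "generation": fitness improves by 0.01 per step, so the target
# -1e-6 is first reached at generation 100, well before the limit of 200.
state, best = auto_run_sketch(
    lambda s: (s + 1, s, -1.0 + 0.01 * s),
    generation_limit=200,
    fitness_target=-1e-6,
)
print(best)  # 100
```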
## Installation
Install `tensorneat` from the GitHub source code:
```
pip install git+https://github.com/EMI-Group/tensorneat.git
```
More examples are in `tensorneat/examples`.
## Community & Support


@@ -1,31 +1,27 @@
```python
from tensorneat.pipeline import Pipeline
from tensorneat import algorithm, genome, problem, common

algorithm = algorithm.NEAT(
    pop_size=10000,
    species_size=20,
    survival_threshold=0.01,
    genome=genome.DefaultGenome(
        num_inputs=3,
        num_outputs=1,
        output_transform=common.ACT.sigmoid,
    ),
)
problem = problem.XOR3d()

pipeline = Pipeline(
    algorithm,
    problem,
    generation_limit=200,
    fitness_target=-1e-6,
    seed=42,
)
state = pipeline.setup()
# run until termination
state, best = pipeline.auto_run(state)
# show result
pipeline.show(state, best)
```