Bumped python and extensions package versions.
miguelalonsojr committed Oct 4, 2024
1 parent 9ac55b2 commit e9e3601
Showing 26 changed files with 70 additions and 68 deletions.
4 changes: 2 additions & 2 deletions colab/Colab_UnityEnvironment_1_Run.ipynb
@@ -32,7 +32,7 @@
},
"source": [
"# ML-Agents Open a UnityEnvironment\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -149,7 +149,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 1,
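The notebooks' install cell follows a common pattern: try the import, and only pip-install the pinned version (`mlagents==1.1.0` after this commit) when it is missing. A stdlib-only sketch of that pattern — the helper name and the commented example call are illustrative, not part of the notebooks:

```python
import importlib.util
import subprocess
import sys

def ensure_installed(module_name: str, pip_spec: str) -> None:
    """Install pip_spec only if module_name is not already importable."""
    if importlib.util.find_spec(module_name) is not None:
        print(f"{module_name} already installed")
        return
    # Mirrors the notebooks' `!python -m pip install -q mlagents==1.1.0`
    subprocess.check_call([sys.executable, "-m", "pip", "install", "-q", pip_spec])
    print(f"Installed {pip_spec}")

# e.g. ensure_installed("mlagents", "mlagents==1.1.0")
```

Pinning an exact version keeps the Colab runtime in sync with the `release_22` Unity packages this commit targets.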
6 changes: 3 additions & 3 deletions colab/Colab_UnityEnvironment_2_Train.ipynb
@@ -22,7 +22,7 @@
},
"source": [
"# ML-Agents Q-Learning with GridWorld\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/gridworld.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -152,7 +152,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@@ -190,7 +190,7 @@
"id": "pZhVRfdoyPmv"
},
"source": [
-"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
+"The [GridWorld](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Examples.md#gridworld) Environment is a simple Unity visual environment. The Agent is a blue square in a 3x3 grid that is trying to reach a green __`+`__ while avoiding a red __`x`__.\n",
"\n",
"The observation is an image obtained by a camera on top of the grid.\n",
"\n",
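The GridWorld dynamics this notebook describes (a blue agent on a 3x3 grid, a green `+` goal, a red `x` pit) can be caricatured in a few lines of plain Python. This is a hypothetical stand-in for building intuition about the reward structure only — the real Unity environment uses image observations from an overhead camera:

```python
# Toy 3x3 GridWorld in the spirit of the ML-Agents example environment.
# Action encoding (0-3) and reward values here are illustrative assumptions.
ACTIONS = {0: (-1, 0), 1: (1, 0), 2: (0, -1), 3: (0, 1)}  # up, down, left, right

class ToyGridWorld:
    def __init__(self, goal=(2, 2), pit=(1, 1)):
        self.goal, self.pit = goal, pit
        self.pos = (0, 0)  # start in the top-left cell

    def step(self, action):
        """Move one cell (clamped to the grid); return (pos, reward, done)."""
        dr, dc = ACTIONS[action]
        r = min(2, max(0, self.pos[0] + dr))
        c = min(2, max(0, self.pos[1] + dc))
        self.pos = (r, c)
        if self.pos == self.goal:
            return self.pos, 1.0, True    # reached the green '+'
        if self.pos == self.pit:
            return self.pos, -1.0, True   # stepped on the red 'x'
        return self.pos, -0.01, False     # small per-step penalty
```

A per-step penalty plus terminal rewards of ±1 is the shape of signal Q-learning needs to prefer short paths to the goal.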
12 changes: 6 additions & 6 deletions colab/Colab_UnityEnvironment_3_SideChannel.ipynb
@@ -23,7 +23,7 @@
},
"source": [
"# ML-Agents Use SideChannels\n",
-"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_21_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://raw.githubusercontent.com/Unity-Technologies/ml-agents/release_22_docs/docs/images/3dball_big.png\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -153,7 +153,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
],
"execution_count": 2,
@@ -176,7 +176,7 @@
"## Side Channel\n",
"\n",
"SideChannels are objects that can be passed to the constructor of a UnityEnvironment or the `make()` method of a registry entry to send non Reinforcement Learning related data.\n",
-"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
+"More information available [here](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#communicating-additional-information-with-the-environment)\n",
"\n",
"\n",
"\n"
@@ -189,7 +189,7 @@
},
"source": [
"### Engine Configuration SideChannel\n",
-"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
+"The [Engine Configuration Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#engineconfigurationchannel) is used to configure how the Unity Engine should run.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@@ -282,7 +282,7 @@
},
"source": [
"### Environment Parameters Channel\n",
-"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
+"The [Environment Parameters Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Python-API.md#environmentparameters) is used to modify environment parameters during the simulation.\n",
"We will use the GridWorld environment to demonstrate how to use the EngineConfigurationChannel."
]
},
@@ -419,7 +419,7 @@
},
"source": [
"### Creating your own Side Channels\n",
-"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
+"You can send various kinds of data between a Unity Environment and Python but you will need to [create your own implementation of a Side Channel](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Custom-SideChannels.md#custom-side-channels) for advanced use cases.\n"
]
},
{
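A side channel, as the notebook describes it, is essentially a UUID-identified byte stream between Unity and Python. The framing idea can be illustrated with a pure-Python sketch. Note this is an assumed, simplified format for intuition only — the actual wire protocol is an internal detail of `mlagents_envs` and its `SideChannel`/message classes:

```python
import struct
import uuid

def encode_message(channel_id: uuid.UUID, text: str) -> bytes:
    """Frame a string message as: 16-byte channel UUID | int32 length | utf-8 payload."""
    payload = text.encode("utf-8")
    return channel_id.bytes + struct.pack("<i", len(payload)) + payload

def decode_message(data: bytes):
    """Inverse of encode_message: recover (channel_id, text) from a framed blob."""
    channel_id = uuid.UUID(bytes=data[:16])
    (length,) = struct.unpack_from("<i", data, 16)
    return channel_id, data[20:20 + length].decode("utf-8")
```

The UUID is what lets the receiving side route a raw payload to the correct channel implementation, which is why a custom side channel must use the same channel id on both the C# and Python ends.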
4 changes: 2 additions & 2 deletions colab/Colab_UnityEnvironment_4_SB3VectorEnv.ipynb
@@ -7,7 +7,7 @@
},
"source": [
"# ML-Agents run with Stable Baselines 3\n",
-"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
+"<img src=\"https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/image-banner.png?raw=true\" align=\"middle\" width=\"435\"/>"
]
},
{
@@ -127,7 +127,7 @@
" import mlagents\n",
" print(\"ml-agents already installed\")\n",
"except ImportError:\n",
-" !python -m pip install -q mlagents==1.0.0\n",
+" !python -m pip install -q mlagents==1.1.0\n",
" print(\"Installed ml-agents\")"
]
},
@@ -28,24 +28,24 @@
The ML-Agents Extensions package is not currently available in the Package Manager. There are two
recommended ways to install the package:

### Local Installation
-[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
-[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/Installation.md#advanced-local-installation-for-development-1)
+[Clone the repository](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#clone-the-ml-agents-toolkit-repository-optional) and follow the
+[Local Installation for Development](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/Installation.md#advanced-local-installation-for-development-1)
directions (substituting `com.unity.ml-agents.extensions` for the package name).

### Github via Package Manager
In Unity 2019.4 or later, open the Package Manager, hit the "+" button, and select "Add package from git URL".

-![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/images/unity_package_manager_git_url.png)
+![Package Manager git URL](https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/images/unity_package_manager_git_url.png)

In the dialog that appears, enter
```
-git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21
+git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22
```

You can also edit your project's `manifest.json` directly and add the following line to the `dependencies`
section:
```
-"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_21",
+"com.unity.ml-agents.extensions": "git+https://github.com/Unity-Technologies/ml-agents.git?path=com.unity.ml-agents.extensions#release_22",
```
See [Git dependencies](https://docs.unity3d.com/Manual/upm-git.html#subfolder) for more information. Note that this
may take several minutes to resolve the packages the first time that you add it.
@@ -67,4 +67,4 @@ If using the `InputActuatorComponent`
- No way to customize the action space of the `InputActuatorComponent`

## Need Help?
-The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/README.md) contains links for contacting the team or getting support.
+The main [README](https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/README.md) contains links for contacting the team or getting support.
4 changes: 2 additions & 2 deletions com.unity.ml-agents.extensions/package.json
@@ -1,11 +1,11 @@
{
"name": "com.unity.ml-agents.extensions",
"displayName": "ML Agents Extensions",
-"version": "0.6.1-preview",
+"version": "0.6.1-exp.1",
"unity": "2023.2",
"description": "A source-only package for new features based on ML-Agents",
"dependencies": {
-"com.unity.ml-agents": "3.0.0-exp.1",
+"com.unity.ml-agents": "3.0.0",
"com.unity.modules.physics": "1.0.0"
}
}
4 changes: 2 additions & 2 deletions com.unity.ml-agents/Runtime/Academy.cs
@@ -20,7 +20,7 @@
* API. For more information on each of these entities, in addition to how to
* set-up a learning environment and train the behavior of characters in a
* Unity scene, please browse our documentation pages on GitHub:
-* https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/docs/
+* https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/docs/
*/

namespace Unity.MLAgents
@@ -61,7 +61,7 @@ void FixedUpdate()
/// fall back to inference or heuristic decisions. (You can also set agents to always use
/// inference or heuristics.)
/// </remarks>
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_21_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/tree/release_22_docs/" +
"docs/Learning-Environment-Design.md")]
public class Academy : IDisposable
{
2 changes: 1 addition & 1 deletion com.unity.ml-agents/Runtime/Actuators/IActionReceiver.cs
@@ -184,7 +184,7 @@ public interface IActionReceiver
///
/// See [Agents - Actions] for more information on masking actions.
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
void WriteDiscreteActionMask(IDiscreteActionMask actionMask);
@@ -16,7 +16,7 @@ public interface IDiscreteActionMask
///
/// See [Agents - Actions] for more information on masking actions.
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#masking-discrete-actions
/// </remarks>
/// <param name="branch">The branch for which the actions will be masked.</param>
/// <param name="actionIndex">Index of the action.</param>
26 changes: 13 additions & 13 deletions com.unity.ml-agents/Runtime/Agent.cs
@@ -192,13 +192,13 @@ public override BuiltInActuatorType GetBuiltInActuatorType()
/// [OnDisable()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnDisable.html]
/// [OnBeforeSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnBeforeSerialize.html
/// [OnAfterSerialize()]: https://docs.unity3d.com/ScriptReference/MonoBehaviour.OnAfterSerialize.html
-/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md
-/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design.md
+/// [Agents]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md
+/// [Reinforcement Learning in Unity]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design.md
/// [Unity ML-Agents Toolkit]: https://github.com/Unity-Technologies/ml-agents
-/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Readme.md
+/// [Unity ML-Agents Toolkit manual]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Readme.md
///
/// </remarks>
-[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/" +
+[HelpURL("https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/" +
"docs/Learning-Environment-Design-Agents.md")]
[Serializable]
[RequireComponent(typeof(BehaviorParameters))]
@@ -728,8 +728,8 @@ public int CompletedEpisodes
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
/// </remarks>
/// <param name="reward">The new value of the reward.</param>
public void SetReward(float reward)
@@ -756,8 +756,8 @@ public void SetReward(float reward)
/// for information about mixing reward signals from curiosity and Generative Adversarial
/// Imitation Learning (GAIL) with rewards supplied through this method.
///
-/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#rewards
-/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
+/// [Agents - Rewards]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#rewards
+/// [Reward Signals]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/ML-Agents-Overview.md#a-quick-note-on-reward-signals
///</remarks>
/// <param name="increment">Incremental reward value.</param>
public void AddReward(float increment)
@@ -945,8 +945,8 @@ public virtual void Initialize() { }
/// implementing a simple heuristic function can aid in debugging agent actions and interactions
/// with its environment.
///
-/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
-/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Demonstration Recorder]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
/// </remarks>
/// <example>
@@ -1203,7 +1203,7 @@ void ResetSensors()
/// For more information about observations, see [Observations and Sensors].
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
+/// [Observations and Sensors]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#observations-and-sensors
/// </remarks>
public virtual void CollectObservations(VectorSensor sensor)
{
@@ -1245,7 +1245,7 @@ public ReadOnlyCollection&lt;float&gt; GetStackedObservations()
///
/// See [Agents - Actions] for more information on masking actions.
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </remarks>
/// <seealso cref="IActionReceiver.OnActionReceived"/>
public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
@@ -1312,7 +1312,7 @@ public virtual void WriteDiscreteActionMask(IDiscreteActionMask actionMask) { }
///
/// For more information about implementing agent actions see [Agents - Actions].
///
-/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs/Learning-Environment-Design-Agents.md#actions
+/// [Agents - Actions]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs/Learning-Environment-Design-Agents.md#actions
/// </para>
/// </remarks>
/// <param name="actions">
@@ -19,7 +19,7 @@ namespace Unity.MLAgents.Demonstrations
/// See [Imitation Learning - Recording Demonstrations] for more information.
///
/// [GameObject]: https://docs.unity3d.com/Manual/GameObjects.html
-/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_21_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
+/// [Imitation Learning - Recording Demonstrations]: https://github.com/Unity-Technologies/ml-agents/blob/release_22_docs/docs//Learning-Environment-Design-Agents.md#recording-demonstrations
/// </remarks>
[RequireComponent(typeof(Agent))]
[AddComponentMenu("ML Agents/Demonstration Recorder", (int)MenuGroup.Default)]
4 changes: 2 additions & 2 deletions com.unity.ml-agents/package.json
@@ -12,8 +12,8 @@
},
"samples": [
{
-"displayName":"3D Ball",
-"description":"The 3D Ball sample is a simple environment that is a great for jumping into ML-Agents to see how things work.",
+"displayName": "3D Ball",
+"description": "The 3D Ball sample is a simple environment that is a great for jumping into ML-Agents to see how things work.",
"path": "Samples~/3DBall"
}
]