Scheduling Abstraction Layer (SAL) is an abstraction layer initially developed a

## Table of Contents

- [1. Introduction](#1-introduction)
- [2. Installation](#2-installation)
- [3. Usage](#3-usage)
- [4. Contributing](#4-contributing)
- [5. License](#5-license)

## 1. Introduction

SAL is a project initially developed under the Morphemic project, part of the EU
SAL can be deployed in several ways: as a standalone microservice, within a Docker container, or as a Kubernetes pod. Below are the detailed instructions for each deployment method.

### 2.1. Deploying SAL as a Standalone Microservice
In this deployment approach, SAL runs directly on the host system using a Java runtime environment.
It is managed manually, meaning that you control the environment, dependencies, and configurations.
However, it is limited to the capabilities of the host system: scaling requires manually setting up additional instances, and networking relies on the host's settings, with manual configuration needed for external access and load balancing.
### 2.2. Deploying SAL as a Docker Container
In this deployment approach, SAL runs inside a Docker container, providing a consistent environment across different systems. Management is handled via Docker commands or Docker Compose, with containerization isolating the application and its dependencies. While SAL can scale across multiple containers on the same machine, scalability is limited to a single-node setup unless additional tools are utilized. Docker manages networking, though more complex configurations may require manual setup. This method is ideal for consistent deployment across various environments, easier distribution, and meeting basic scalability needs.

SAL can be deployed as a Docker container either by using a pre-built image or by building your own image.


#### 2.2.1. Using Pre-Built SAL Docker Images
1. Clone the ProActive Docker repository:

```bash
git clone https://github.com/ow2-proactive/docker
```

2. Copy the built `.war` file:

Copy the `scheduling-abstraction-layer/sal-service/build/libs/scheduling-abstraction-layer-xxx.war` file generated in section 2.1.1 to the `docker/sal/artefacts` directory.

3. Build the Docker image:

To stop and remove the containers:

```bash
docker-compose down
```

### 2.3. Deploying SAL as a Kubernetes Pod

In this deployment approach, SAL is deployed as a pod within a Kubernetes cluster, which offers advanced orchestration and management features.
Kubernetes automatically handles deployment, scaling, and operations across a cluster of nodes, providing native support for horizontal scaling, automatic load balancing, and self-healing capabilities. The robust networking solutions provided by Kubernetes include service discovery, Ingress controllers, and built-in load balancing. This method is ideal for large-scale, production environments where high availability, scalability, and complex orchestration are required.

To deploy SAL on Kubernetes, use a pre-built Docker image from the remote Docker repository [DockerHub](https://hub.docker.com/r/activeeon/sal/tags), or create your own image as described in section 2.2. You can then deploy this image as a Kubernetes pod.

1. Edit Kubernetes Deployment and Service Manifests:

Edit [sal.yaml](https://github.com/ow2-proactive/scheduling-abstraction-layer/blob/master/deployment/sal.yaml) and set up the ProActive connection, the SAL image, and the ports as described in section 2.2.3, Step 1.

NOTE: Update `/path/to/scripts` to the path where your scripts are located on the host machine.
To delete the deployment and service:
kubectl delete -f sal.yaml
```

## 3. Usage
Once SAL is deployed, you can interact with it via its REST API, monitor its operation, and view logs to ensure everything is functioning correctly. Here’s how to use SAL effectively.

### 3.1. Accessing SAL REST Endpoints
SAL exposes several REST API endpoints that serve as interfaces to the ProActive Scheduler & Resource Manager. For detailed information on each endpoint, please go [here](https://github.com/ow2-proactive/scheduling-abstraction-layer/blob/master/documentation/README.md).

To showcase usage, the [Connect](https://github.com/ow2-proactive/scheduling-abstraction-layer/blob/master/documentation/1-connection-endpoints.md#11--connect-endpoint) endpoint will be used. It requires the SAL protocol, host, and port, together with the ProActive username and password, configured as in the deployment `.yaml` files.

Below are the instructions for connecting to and disconnecting from the ProActive server, using tools such as Postman or cURL.

#### 3.1.1. Using Postman

Download and install [Postman](https://www.postman.com/) if you haven’t already.

* Set up the request:
  * URL: `http://localhost:8080/sal/pagateway/connect`
  * Method: `POST`
  * Headers: none
  * Body:
    ```json
    {
      "username": "ProActive server username",
      "password": "ProActive server password"
    }
    ```
* Send the request: click the "Send" button to execute the request and review the response.
* Reply: a plain-text response containing the session ID.

#### 3.1.2. Using cURL

* Open a terminal.
* Execute the cURL command:
```bash
curl -X POST "http://localhost:8080/sal/pagateway/connect" \
     --header "Content-Type: application/json" \
     -d '{"username": "ProActive server username", "password": "ProActive server password"}'
```
* Reply: a plain-text response containing the session ID.
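The same connect request can also be issued programmatically. Below is a minimal Python sketch using only the standard library; the credentials are placeholders, and the URL assumes SAL is reachable on `localhost:8080` as in the examples above:

```python
import json
import urllib.request

# Placeholder credentials -- replace with your ProActive username and password.
payload = json.dumps({"username": "my_user", "password": "my_pass"}).encode("utf-8")

# Build the same POST request as the cURL example above.
req = urllib.request.Request(
    "http://localhost:8080/sal/pagateway/connect",
    data=payload,
    headers={"Content-Type": "application/json"},
    method="POST",
)

print(req.full_url)      # the connect endpoint URL
print(req.get_method())  # POST

# Sending the request requires a running SAL instance:
# with urllib.request.urlopen(req) as resp:
#     session_id = resp.read().decode()  # plain-text session ID
```

The returned session ID is then passed to subsequent endpoint calls, as described in the endpoint documentation.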

### 3.2. Checking SAL logs

#### 3.2.1. View Logs for the SAL Docker Container
When SAL is deployed as a Docker container, as in section 2.2, you can view its logs using Docker commands.

* Launch your command line interface (CLI).
* List Running Containers: To find the container name or ID, use:
```bash
docker ps
```
Look for the container name `myComposeSAL` or the name you used.
* View general SAL logs:
```bash
docker logs myComposeSAL
```
* View general SAL database logs:
```bash
docker logs myComposeMariaDB
```

* View detailed SAL logs inside the container:
```bash
docker exec -it myComposeSAL /bin/bash  # Enter the SAL container’s shell
cd logs
cat scheduling-abstraction-layer.log  # View detailed logs
```
* Query the SAL database (MariaDB):
```bash
docker exec -it myComposeMariaDB /bin/bash
# Replace <password> with the password you've set for the MariaDB root user
mariadb -uroot -p<password> proactive
```


#### 3.2.2. View Logs for the SAL Pod
When SAL is deployed as a Kubernetes pod, you can access the logs using `kubectl` commands.
* Get the name of the SAL pod:
```bash
kubectl get po -o wide
```
* View general SAL logs:
```bash
# Replace <namespace> with the appropriate namespace and <pod-name> with the actual pod name obtained in previous step
kubectl -n <namespace> logs <pod-name> sal
```
* View detailed SAL logs inside the container:
```bash
kubectl exec -it <pod-name> -c sal -- /bin/bash  # Enter the SAL container’s shell
cd logs
cat scheduling-abstraction-layer.log
```

* Query the SAL database (MariaDB):
```bash
# Replace <password> with the appropriate MariaDB root password
kubectl exec -it <pod-name> -c mariadb -- mariadb -uroot -p<password> proactive
```
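For repeated log checks, the `kubectl` invocation above can be assembled from a small helper. The Python sketch below builds the command list without running it; the namespace and pod name are hypothetical placeholders, and the `-c` flag is equivalent to passing the container name positionally as in the example above:

```python
def kubectl_logs_cmd(namespace: str, pod: str, container: str = "sal") -> list[str]:
    """Build the kubectl command that fetches logs for one container of a pod."""
    return ["kubectl", "-n", namespace, "logs", pod, "-c", container]

# Hypothetical namespace and pod name -- substitute the values from `kubectl get po`.
cmd = kubectl_logs_cmd("default", "sal-pod")
print(" ".join(cmd))  # kubectl -n default logs sal-pod -c sal

# Running it requires kubectl and access to the cluster:
# import subprocess
# subprocess.run(cmd, check=True)
```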

## 4. Contributing

Contributions to SAL are welcome! If you have any bug fixes, improvements, or new features to propose, please feel free to open a pull request. For major changes, it is recommended to discuss your ideas with the maintainers first.

## 5. License

Scheduling Abstraction Layer (SAL) is distributed under the [MIT License](https://github.com/ow2-proactive/scheduling-abstraction-layer/blob/master/LICENSE). Please see the [LICENSE](https://github.com/ow2-proactive/scheduling-abstraction-layer/blob/master/LICENSE) file for more information.
Note that using SAL requires a license for the [ProActive Scheduler & Resource Manager](https://proactive.activeeon.com/).