# Job Manager is not responding

1\. Check the `slurmctld` logs for errors

```log
[2022-05-17T14:29:12.809] error: Unable to resolve "NODE_HOSTNAME_1": Unknown host
[2022-05-17T14:29:12.809] error: _set_slurmd_addr: failure on NODE_HOSTNAME_1
[2022-05-17T14:29:13.591] error: Unable to resolve "NODE_HOSTNAME_2": Unknown host
[2022-05-17T14:29:13.591] error: _set_slurmd_addr: failure on NODE_HOSTNAME_2
```
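When the log contains many such entries, the offending hostnames can be pulled out in one pass. A minimal sketch, assuming the log lives at `/var/log/slurmctld.log` (an assumption; the actual path is set by `SlurmctldLogFile` in `slurm.conf`):

```shell
# Log path is an assumption; check SlurmctldLogFile in slurm.conf
LOG=/var/log/slurmctld.log

# List each hostname slurmctld failed to resolve, once
grep 'Unable to resolve' "$LOG" \
  | sed 's/.*Unable to resolve "\([^"]*\)".*/\1/' \
  | sort -u
```

The deduplicated list is what to look for in `/etc/hosts` in the next step.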

2\. Check `/etc/hosts` to see whether the hostnames from the logs are listed there

```text
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.10.0.1 graphical_node
172.10.0.2 NODE_HOSTNAME_3
172.10.0.3 NODE_HOSTNAME_4
172.10.0.4 NODE_HOSTNAME_5
```

{% hint style="info" %}
Note that the hostnames from the logs are missing from the file. If the cluster is hosted on an Azure VMSS, this can happen because VMSS instances are assigned new hostnames when they restart or are reprovisioned.
{% endhint %}
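Name resolution can also be tested directly for a specific node, since `/etc/hosts` is not the only possible source. A quick check, using the hostname taken from the log (the name is illustrative):

```shell
# Succeeds and prints the address if the name resolves
# (via /etc/hosts, DNS, etc.); falls through otherwise
getent hosts NODE_HOSTNAME_1 || echo "NODE_HOSTNAME_1 does not resolve"
```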

3\. Check nodes with:

```bash
sinfo
```

* It should return output similar to:

```text
PARTITION AVAIL  TIMELIMIT  NODES  STATE NODELIST
debug*       up   infinite      5  idle~ NODE_HOSTNAMES[1]
debug*       up   infinite     10  alloc NODE_HOSTNAMES[2]
```
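For a quick per-state summary instead of scanning the full listing, the default `sinfo` columns (NODES in column 4, STATE in column 5) can be aggregated with `awk`. A sketch, assuming the default output layout shown above:

```shell
# Count nodes per state from the default sinfo column layout
sinfo --noheader | awk '{count[$5] += $4} END {for (s in count) print s, count[s]}'
```

States suffixed with `~` (e.g. `idle~`) denote powered-down cloud nodes, which is worth watching on VMSS-backed clusters.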

4\. Check whether the `slurmctld` service is up and running:

```bash
sudo systemctl status slurmctld
```

5\. If the service is not running properly, restart it (or stop and start it explicitly):

```bash
sudo systemctl restart slurmctld

# OR

sudo systemctl stop slurmctld
sudo systemctl start slurmctld
```

6\. Check that the service is running and that the nodes are up and processing jobs:

```bash
# Check service:
sudo systemctl status slurmctld

# Check nodes:
sinfo

# Check jobs:
squeue
```
