vm_disk does not change tiering_priority on first run #30

Open · justinc1 opened this issue Oct 4, 2022 · 4 comments

Labels: waiting_api (waiting on Scale API change)

justinc1 (Collaborator) commented Oct 4, 2022

My console output:

(.venv) justin_cinkelj@jcpc:~/devel/scale-ansible-collection/ansible_collections/scale_computing/hypercore$ ansible-playbook -i localhost, examples/dd_a.yml -v
Using /home/justin_cinkelj/devel/scale-ansible-collection/ansible_collections/scale_computing/hypercore/ansible.cfg as config file
[WARNING]: running playbook inside collection scale_computing.hypercore

PLAY [Example iso_info module] *********************************************************************************************************************************

TASK [Clone vm security - if not present] **********************************************************************************************************************
changed: [localhost] => changed=true 
  ansible_facts:
    discovered_interpreter_python: /usr/bin/python3
  msg: Virtual machine - ubuntu20_04 - cloning complete to - security-xlab-test.

TASK [Security Vm disk desired configuration] ******************************************************************************************************************
changed: [localhost] => changed=true 
  record:
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 0
    iso_name: ''
    mount_points: []
    read_only: false
    size: 0
    tiering_priority_factor: 0
    type: ide_cdrom
    uuid: 51bb4342-a963-429b-889c-d708304ca43d
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 1
    iso_name: cloud-init-a9a5dbbc.iso
    mount_points: []
    read_only: false
    size: 1048576
    tiering_priority_factor: 0
    type: ide_cdrom
    uuid: 76804ec8-4346-4435-a55e-7559627acbe5
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 0
    iso_name: ''
    mount_points: []
    read_only: false
    size: 53687091200
    tiering_priority_factor: 4
    type: virtio_disk
    uuid: 01c49aa4-d303-440c-9f07-929751a484fb
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 1
    iso_name: ''
    mount_points: []
    read_only: false
    size: 107374182400
    tiering_priority_factor: 1
    type: virtio_disk
    uuid: 2ab0308c-7818-42d6-9c7a-b8f8fe2fc3f8
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  vm_rebooted: false

TASK [Security Vm desired configuration and state] *************************************************************************************************************
changed: [localhost] => changed=true 
  vm_rebooted: false

PLAY RECAP *****************************************************************************************************************************************************
localhost                  : ok=3    changed=3    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

(.venv) justin_cinkelj@jcpc:~/devel/scale-ansible-collection/ansible_collections/scale_computing/hypercore$ 
(.venv) justin_cinkelj@jcpc:~/devel/scale-ansible-collection/ansible_collections/scale_computing/hypercore$ ansible-playbook -i localhost, examples/dd_a.yml -v
Using /home/justin_cinkelj/devel/scale-ansible-collection/ansible_collections/scale_computing/hypercore/ansible.cfg as config file
[WARNING]: running playbook inside collection scale_computing.hypercore

PLAY [Example iso_info module] *********************************************************************************************************************************

TASK [Clone vm security - if not present] **********************************************************************************************************************
ok: [localhost] => changed=false 
  ansible_facts:
    discovered_interpreter_python: /usr/bin/python3
  msg: Virtual machine security-xlab-test already exists.

TASK [Security Vm disk desired configuration] ******************************************************************************************************************
changed: [localhost] => changed=true 
  record:
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 0
    iso_name: ''
    mount_points: []
    read_only: false
    size: 0
    tiering_priority_factor: 0
    type: ide_cdrom
    uuid: 51bb4342-a963-429b-889c-d708304ca43d
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 1
    iso_name: cloud-init-a9a5dbbc.iso
    mount_points: []
    read_only: false
    size: 1048576
    tiering_priority_factor: 0
    type: ide_cdrom
    uuid: 76804ec8-4346-4435-a55e-7559627acbe5
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 0
    iso_name: ''
    mount_points: []
    read_only: false
    size: 53687091200
    tiering_priority_factor: 4
    type: virtio_disk
    uuid: 01c49aa4-d303-440c-9f07-929751a484fb
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  - cache_mode: none
    disable_snapshotting: false
    disk_slot: 1
    iso_name: ''
    mount_points: []
    read_only: false
    size: 107374182400
    tiering_priority_factor: 1
    type: virtio_disk
    uuid: 2ab0308c-7818-42d6-9c7a-b8f8fe2fc3f8
    vm_uuid: a9a5dbbc-d96b-48ff-986a-3aaea22e4e42
  vm_rebooted: false

TASK [Security Vm desired configuration and state] *************************************************************************************************************
ok: [localhost] => changed=false 
  vm_rebooted: false

PLAY RECAP *****************************************************************************************************************************************************
localhost                  : ok=3    changed=1    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

The playbook examples/dd_a.yml:

---
- name: Example module
  hosts: localhost
  connection: local
  gather_facts: false
  environment:
    MY_VAR: my_value
    # - SC_HOST: https://1.2.3.4
    # - SC_USERNAME: admin
    # - SC_PASSWORD: todo
  vars:
    site_name: xlab-test

  tasks:
  # ------------------------------------------------------
  # begin security vm configurations
    - name: Clone vm security - if not present
      scale_computing.hypercore.vm_clone:
        vm_name: security-{{ site_name }}
        tags:
          - xlab-demo
          - ansible
          - cloudinit
        source_vm_name: ubuntu20_04
        cloud_init:
          user_data: |
            #cloud-config
            password: "password"
            chpasswd: { expire: False }
            ssh_pwauth: True
            apt: {sources: {docker.list: {source: 'deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable', keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88}}}
            packages: [qemu-guest-agent, docker-ce, docker-ce-cli, docker-compose, unzip]
            bootcmd:
              - [ sh, -c, 'sudo echo GRUB_CMDLINE_LINUX="nomodeset" >> /etc/default/grub' ]
              - [ sh, -c, 'sudo echo GRUB_GFXPAYLOAD_LINUX="1024x768" >> /etc/default/grub' ]
              - [ sh, -c, 'sudo echo GRUB_DISABLE_LINUX_UUID=true >> /etc/default/grub' ]
              - [ sh, -c, 'sudo update-grub' ]
            runcmd:
              - [ systemctl, restart, --no-block, qemu-guest-agent ]
              - [ curl -s https://api.sc-platform.sc-platform.avassa.net/install | sudo sh -s -- -y -c  ]
            write_files:
            # configure docker daemon to be accessible remotely via TCP on socket 2375
            - content: |
                [Service]
                ExecStart=
                ExecStart=/usr/bin/dockerd -H unix:// -H tcp://0.0.0.0:2375
              path: /etc/systemd/system/docker.service.d/options.conf
          meta_data: |
            dsmode: local
            local-hostname: "security-{{ site_name }}"
      register: security
    #   notify:
    #     - pharmacy-created

    # - name: Flush handlers  #notifies handlers right away instead of at end of playbook
    #   meta: flush_handlers

    - name: Security Vm disk desired configuration
      scale_computing.hypercore.vm_disk:
        vm_name: security-{{ site_name }}
        items:
          - disk_slot: 0
            type: virtio_disk
            size: "{{ '50 GB' | human_to_bytes }}"
            tiering_priority_factor: 4
          - disk_slot: 1
            type: virtio_disk
            size: "{{ '100 GB' | human_to_bytes }}"
            tiering_priority_factor: 1
        state: present

    - name: Security Vm desired configuration and state
      scale_computing.hypercore.vm_params:
        vm_name: security-{{ site_name }}
        memory: "{{ '1 GB' | human_to_bytes }}"
        description: security server for {{ site_name }}
        tags:
          - xlab-demo
          - ansible
          - security
          - "{{ site_name }}"
        vcpu: 2
        power_state: start

On the 3rd run, the task "Security Vm disk desired configuration" does report changed=false, as expected.

And the original comment from Dave:

[12:33 PM](https://scalecomputing.slack.com/archives/C03NDHAJWEA/p1664793213941949)
the issue above ^ has something to do with having the second disk … if I remove it, everything is changed in one pass
[12:49 PM](https://scalecomputing.slack.com/archives/C03NDHAJWEA/p1664794142801699)
well - seems to be an issue only if I have a second disk AND am setting tiering_priority_factor, which I know we are doing some work on … maybe add a test setting that on a second disk?
justinc1 assigned justinc1 and ghost, and unassigned justinc1, on Oct 4, 2022
ddemlow (Member) commented Oct 27, 2022

Further testing has shown that the tiering priority on the SECOND disk does in fact require 2 passes to actually change (confirmed by checking the UI value on the second disk after waiting several minutes).

- name: Security Vm disk desired configuration
  scale_computing.hypercore.vm_disk:
    vm_name: "securityCONTRACTOR-{{ site_name }}"
    items:
      - disk_slot: 0
        type: virtio_disk
        size: "{{ '50 GB' | human_to_bytes }}"  # '50 GB' | human_to_bytes results in a 53.7 GB VSD in HyperCore
        tiering_priority_factor: 3
      - disk_slot: 1
        type: virtio_disk
        size: "{{ 200 * 1000 * 1000 * 1000 }}"  # this calculation results in a 200 GB VSD in HyperCore
        tiering_priority_factor: 1  # as of 10/26 - this is taking 2 passes to set
    state: present
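
For reference, the 53.7 GB figure comes from the filter's binary interpretation of the unit: ansible.builtin.human_to_bytes reads '50 GB' as 50 GiB, i.e. 50 × 1024³ = 53,687,091,200 bytes, which HyperCore then displays in decimal GB. A quick standalone check (a sketch, not from the issue's playbooks):

- name: Show what human_to_bytes actually returns
  ansible.builtin.debug:
    msg:
      - "{{ '50 GB' | human_to_bytes }}"   # 53687091200 = 50 * 1024^3, ~53.7 decimal GB
      - "{{ '100 GB' | human_to_bytes }}"  # 107374182400, matching the size values in the records above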

ddemlow added the bug (Something isn't working) and todo (Selected for development) labels on Oct 27, 2022
justinc1 changed the title from "vm_disk reports changed=true when no change was made" to "vm_disk does not change tiering_priority on first run" on Oct 27, 2022
domendobnikar self-assigned this on Nov 14, 2022
domendobnikar (Collaborator) commented Nov 14, 2022

@ddemlow @justinc1
I have done extensive testing regarding this issue today. It seems the problem is coming from the API backend.
When a disk is created, its tiering priority is automatically set to 4 (8 on the backend), and the tiering priority actually sent in the create request is ignored. This happens during disk creation only, which is why the module is not idempotent. From what I've been able to test, we are sending the correct values to the API.
This is the playbook I used to test:

- name: Create test VM and change tiering priority.
  hosts: localhost
  tasks:
  - name: Create XLAB-test-tiering-prio-VM-UI.
    scale_computing.hypercore.vm:
      cluster_instance:
        host: ***********
        username: ***********
        password: ***********
      state: present
      tags:
        - Xlab
      memory: "{{ '2048 MB' | human_to_bytes }}"
      vcpu: 2
      power_state: stop
      vm_name: XLAB-test-tiering-prio-VM-UI
      disks: []
      nics: []
    register: testout

  - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI.
    scale_computing.hypercore.vm_disk:
      cluster_instance:
        host: ***********
        username: ***********
        password: ***********
      state: set
      vm_name: XLAB-test-tiering-prio-VM-UI
      items:
        - disk_slot: 0
          tiering_priority_factor: 1
          type: virtio_disk
          size: "{{ '100 GB' | human_to_bytes }}"
        - disk_slot: 0
          type: ide_cdrom
          iso_name: TinyCore-vm.iso
        - disk_slot: 1
          tiering_priority_factor: 1
          type: ide_disk
          size: "{{ '10.1 GB' | human_to_bytes }}"
    register: testout
  - name: Show output
    debug:
      var: testout

  - name: Wait N sec - tieringPriorityFactor should change
    ansible.builtin.pause:
      seconds: 30

  - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI. (SECOND TIME)
    scale_computing.hypercore.vm_disk:
      cluster_instance:
        host: ***********
        username: ***********
        password: ***********
      state: set
      vm_name: XLAB-test-tiering-prio-VM-UI
      items:
        - disk_slot: 0
          tiering_priority_factor: 1
          type: virtio_disk
          size: "{{ '100 GB' | human_to_bytes }}"
        - disk_slot: 0
          type: ide_cdrom
          iso_name: TinyCore-vm.iso
        - disk_slot: 1
          tiering_priority_factor: 1
          type: ide_disk
          size: "{{ '10.1 GB' | human_to_bytes }}"
    register: testout
  - name: Show output
    debug:
      var: testout

  - name: Wait N sec - tieringPriorityFactor should change
    ansible.builtin.pause:
      seconds: 30

  - name: Change tiering prio on XLAB-test-tiering-prio-VM-UI. (THIRD TIME) - Should be idempotent by now?
    scale_computing.hypercore.vm_disk:
      cluster_instance:
        host: ***********
        username: ***********
        password: ***********
      state: set
      vm_name: XLAB-test-tiering-prio-VM-UI
      items:
        - disk_slot: 0
          tiering_priority_factor: 1
          type: virtio_disk
          size: "{{ '100 GB' | human_to_bytes }}"
        - disk_slot: 0
          type: ide_cdrom
          iso_name: TinyCore-vm.iso
        - disk_slot: 1
          tiering_priority_factor: 1
          type: ide_disk
          size: "{{ '10.1 GB' | human_to_bytes }}"
    register: testout
  - name: Show output
    debug:
      var: testout
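
To see what the API actually stored after each pass, the disks can also be read back between runs; a minimal sketch, assuming the collection's vm_info module is available and that its per-disk fields match the vm_disk record output shown earlier:

  - name: Read back the VM after the vm_disk task
    scale_computing.hypercore.vm_info:
      vm_name: XLAB-test-tiering-prio-VM-UI
    register: vm_state

  - name: Show the tiering priority the API actually stored per disk
    ansible.builtin.debug:
      msg: "slot {{ item.disk_slot }} ({{ item.type }}): tiering_priority_factor={{ item.tiering_priority_factor }}"
    loop: "{{ vm_state.records[0].disks }}"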

domendobnikar unassigned ghost on Nov 14, 2022
ddemlow (Member) commented Nov 14, 2022

I will log a ticket to confirm / address this on the HyperCore REST API, as that appears to be a bug ... Would it still be possible / make sense to have the module be aware of this API behavior, to make it idempotent? Create the disk, wait for that API task to complete, and then set the tiering priority? (Similar to how other multi-step tasks may be handled under the hood by an Ansible module - like deleting a powered-on VM: power off first, wait, then delete.) (Internal reference on the REST API issue: issues/5143)
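
Until either the API or the module compensates for this, a playbook-level stopgap is to rerun the task until it stops reporting a change - a sketch only, with disk values taken from the example earlier in this thread:

- name: Security Vm disk desired configuration (retried until idempotent)
  scale_computing.hypercore.vm_disk:
    vm_name: security-{{ site_name }}
    items:
      - disk_slot: 0
        type: virtio_disk
        size: "{{ '50 GB' | human_to_bytes }}"
        tiering_priority_factor: 4
      - disk_slot: 1
        type: virtio_disk
        size: "{{ '100 GB' | human_to_bytes }}"
        tiering_priority_factor: 1
    state: present
  register: disk_result
  retries: 3                          # first pass creates the disk, a later pass applies the tiering priority
  delay: 10                           # give the disk-create API task time to settle
  until: disk_result is not changed   # stop once a run makes no further changes

Note that if the final retry still reports changed, the task fails, which conveniently surfaces the underlying API bug rather than hiding it.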

dradX removed the bug (Something isn't working) label on Nov 18, 2022
ddemlow (Member) commented Dec 6, 2022

Waiting for the Scale REST API fix.

ddemlow added the waiting_api (waiting on Scale API change) label and removed the todo (Selected for development) label on Dec 6, 2022