
Cloudformation Dependency #9

Open
pilgrim2go opened this issue Apr 3, 2017 · 6 comments
pilgrim2go commented Apr 3, 2017

Hi,

I'm using aloisius 0.4 and have the following CloudFormation templates (built with troposphere):

base.py

iam.py

user.py

That is, the IAM template depends on the Base template, and the User template depends on the IAM template.

With the new 'waiters' feature, can aloisius implement such dependency chaining?

I'm digging into the code but haven't figured it out yet.

Thanks again for sharing your work.

adonig (Owner) commented Apr 3, 2017

Hi @pilgrim2go,

please have a look at the example code in the README.md file. You should be able to import your troposphere templates, declaratively add your stacks to a file, and chain the stacks using their outputs. There is a dependency between two stacks if one stack uses an output of another stack as an input parameter. If there is a dependency, aloisius executes the operations on those stacks sequentially. If there is no dependency between two stacks, the parallelisation happens automatically :-)

Best
Andreas

pilgrim2go (Author) commented

@adonig:
As far as I know, if you have 5 stacks and adjust the following line

in stack.py

    _executor = ThreadPoolExecutor(max_workers=3)

you'll see unexpected results (e.g. some threads appear to be lost).

Due to my lack of concurrency experience, I can't explain the reason yet, but a ThreadPool per Stack looks weird to me at first glance.

Besides, I can't find any line of code that specifies a Stack dependency. Such a feature is really needed.

adonig (Owner) commented Apr 4, 2017

Oh, no no no. It is automatically set to the number of CPUs your computer has:

_executor = ThreadPoolExecutor(max_workers=multiprocessing.cpu_count())

That makes sense because your computer can run that many tasks in parallel.
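To illustrate the point about worker counts, here is a self-contained sketch using plain `concurrent.futures` (not aloisius code): submitting more tasks than there are workers does not lose any of them, because extra tasks queue up and run as workers become free.

```python
import multiprocessing
from concurrent.futures import ThreadPoolExecutor

# Pool sized to the machine's CPU count, as in aloisius's stack.py.
_executor = ThreadPoolExecutor(max_workers=multiprocessing.cpu_count())

# Submit 10 tasks -- likely more than the number of workers.
# Excess tasks are queued, not dropped.
futures = [_executor.submit(lambda n=n: n * n) for n in range(10)]
results = [f.result() for f in futures]
print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

All ten tasks complete regardless of `cpu_count`, which is why a small `max_workers` value should delay work rather than discard it.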

Stack dependencies are currently specified implicitly over stack outputs and stack input parameters. Here's an example:

stack1 = Stack(...)
stack2 = Stack(..., Parameters={'input1': stack1.outputs['foo']})
stack3 = Stack(...)

In that scenario, the operations on stack1 and stack3 are executed in parallel, while the operation on stack2 has to wait until the operation on stack1 is finished (because otherwise the outputs of stack1 would not exist).
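The mechanism behind that behaviour can be sketched with a toy model (this is not aloisius's actual implementation; `FakeStack` and its internals are hypothetical, and unlike aloisius's lazy outputs this version blocks the calling thread): each stack's creation runs on a worker thread behind a future, and reading an output waits on that future, so a dependent stack cannot start before its dependency has finished.

```python
import time
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)

class FakeStack:
    """Toy stand-in for a stack: 'creates' itself on a worker thread."""
    def __init__(self, name, parameters=None):
        self.name = name
        self.parameters = parameters or {}
        self._future = _executor.submit(self._create)

    def _create(self):
        time.sleep(0.1)  # pretend to call CloudFormation
        return {'foo': self.name + '-output'}

    @property
    def outputs(self):
        # Blocks until creation has finished -- the implicit dependency.
        return self._future.result()

stack1 = FakeStack('stack1')
stack3 = FakeStack('stack3')  # independent: runs in parallel with stack1
stack2 = FakeStack('stack2',
                   parameters={'input1': stack1.outputs['foo']})  # waits for stack1
print(stack2.parameters['input1'])  # stack1-output
```

Because `stack3` never touches `stack1.outputs`, nothing serialises it against the others.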

diasjorge (Contributor) commented

If you don't use the outputs, you could force blocking by calling aloisius.stacks.wait() before executing the next one.

@adonig adonig added the question label Apr 4, 2017
adonig (Owner) commented Apr 5, 2017

If there is a use for it, we could think about adding a constructor parameter DependsOn, so it would be possible to write something like:

stack1 = Stack(...)
stack2 = Stack(..., DependsOn=[stack1])
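A minimal sketch of how such an explicit DependsOn parameter could work (hypothetical design, not the aloisius API): before doing its own work, a stack's worker simply waits on the futures of its declared dependencies.

```python
import time
from concurrent.futures import ThreadPoolExecutor

_executor = ThreadPoolExecutor(max_workers=4)

class FakeStack:
    """Toy model of a hypothetical DependsOn parameter."""
    def __init__(self, name, depends_on=()):
        self.name = name
        self.finished_at = None
        self._deps = list(depends_on)
        self._future = _executor.submit(self._create)

    def _create(self):
        for dep in self._deps:       # explicit dependency: wait for it first
            dep._future.result()
        time.sleep(0.05)             # pretend to create the stack
        self.finished_at = time.monotonic()
        return self.name

stack1 = FakeStack('stack1')
stack2 = FakeStack('stack2', depends_on=[stack1])
stack2._future.result()
assert stack1.finished_at < stack2.finished_at
```

Unlike output-based chaining, this would express ordering without threading any data between the stacks.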

pilgrim2go (Author) commented

@adonig: Yes, I prefer the DependsOn solution. That makes the Stack definition clearer, and we can build complex dependency graphs.
Also, I thought multiprocessing.cpu_count() was for processes (vs. threads). Since my cpu_count is 4, that's why explicitly setting the number to 4 gave the same result.

Btw, I ended up using your solution to do something simple (for my case):

sc = StackCollection()
s1 = Stack('vpc')
s2 = Stack('cluster', depends=s1)

sc.add(s1)
sc.add(s2)

and then I can do something like

sc.runall()
sc.delete()

or

sc.runall(queryset=QuerySet(__name__='vpc'))

The code is copied from aloisius (it worked for me, but it's not a generic solution).
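A generic version of that collection idea could topologically sort stacks by their declared dependencies before running them. A self-contained sketch under those assumptions (`StackCollection`, `FakeStack`, and `depends` are hypothetical names, not the aloisius API, and no cycle detection is attempted):

```python
class StackCollection:
    """Hypothetical collection that runs stacks in dependency order."""
    def __init__(self):
        self._stacks = []

    def add(self, stack):
        self._stacks.append(stack)

    def _ordered(self):
        # Depth-first topological sort over each stack's .depends list.
        done, order = set(), []
        def visit(s):
            if s.name in done:
                return
            for dep in s.depends:
                visit(dep)
            done.add(s.name)
            order.append(s)
        for s in self._stacks:
            visit(s)
        return order

    def runall(self):
        # Stand-in for creating each stack; returns the execution order.
        return [s.name for s in self._ordered()]

class FakeStack:
    """Toy stack carrying only a name and its dependencies."""
    def __init__(self, name, depends=()):
        self.name, self.depends = name, list(depends)

sc = StackCollection()
s1 = FakeStack('vpc')
s2 = FakeStack('cluster', depends=[s1])
sc.add(s2)   # with a topological sort, add() order no longer matters
sc.add(s1)
print(sc.runall())  # ['vpc', 'cluster']
```

Sorting once up front would also make it easy to run independent stacks in parallel within each level of the graph.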
