supervisor-and-application race condition - there is no way to handle… #655

Closed

tomjoro wants to merge 1 commit into elixir-lang:master from tomjoro:master

Conversation

@tomjoro commented Jan 7, 2016

… in this chapter, so a sleep (hack) is used, with a note to the user. I can't think of any better solution, other than a spin-wait for the bucket to disappear. Otherwise you'd need to monitor the supervisor, or have it send another message once the :DOWN message has been handled.
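
A minimal sketch of the sleep workaround described above, assuming the `KV.Registry` test module from the Mix and OTP guide (the 100 ms delay is an arbitrary, hypothetical value):

```elixir
test "removes buckets on exit", %{registry: registry} do
  KV.Registry.create(registry, "shopping")
  {:ok, bucket} = KV.Registry.lookup(registry, "shopping")
  Agent.stop(bucket)

  # HACK: give the registry time to process the bucket's :DOWN
  # message before asserting that the entry is gone.
  :timer.sleep(100)

  assert KV.Registry.lookup(registry, "shopping") == :error
end
```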

@josevalim (Member)

Thank you. I have also pushed a fix that relies on monitor + DOWN. :)
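
The commit itself is not quoted in this thread; a minimal sketch of a monitor + :DOWN approach in the test, assuming the same `KV.Registry` test setup as above (not necessarily the code that was actually pushed):

```elixir
test "removes buckets on exit", %{registry: registry} do
  KV.Registry.create(registry, "shopping")
  {:ok, bucket} = KV.Registry.lookup(registry, "shopping")

  # Monitor the bucket ourselves, stop it, and wait for our own
  # :DOWN notification before asserting on the registry state.
  ref = Process.monitor(bucket)
  Agent.stop(bucket)
  assert_receive {:DOWN, ^ref, :process, _pid, _reason}

  assert KV.Registry.lookup(registry, "shopping") == :error
end
```

As the rest of the thread establishes, receiving our own :DOWN does not by itself guarantee that the registry has received, let alone processed, its own copy of the notification.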

@josevalim closed this Jan 7, 2016
@tomjoro (Author) commented Jan 7, 2016

Your fix is to the ETS chapter, and that works. But my PR is for the supervisor chapter. If you only do the work up to the end of the supervisor chapter, the test cases will fail (which is what happened to me and others), so some correction is needed in this chapter as well.

@tomjoro (Author) commented Jan 7, 2016

Sorry, my mistake; now I see your change. But I'm still not sure whether waiting for the :DOWN in a second monitor actually guarantees the message has been processed in the Registry, but it does work.

@josevalim (Member)

@tomjoro it guarantees it. All :DOWN messages are guaranteed to be delivered at the same time. :) So if you received it, it is guaranteed to also be in the other process's inbox.

@tomjoro (Author) commented Jan 7, 2016

Oh, that's handy :) Does that mean the code to create a bogus bucket should be put back into the example? The code in the Registry that handles the :DOWN must run to completion before we try to check whether the bucket exists (because it lives in a Map managed by the Registry).
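
The "bogus bucket" trick referred to here is a synchronous call whose only purpose is to flush the registry's mailbox. A minimal sketch, assuming `create/2` is implemented as a `GenServer.call` (the "bogus" name is just a throwaway):

```elixir
Agent.stop(bucket)

# A synchronous call to the registry: by the time create/2 returns,
# the registry has handled every message already in its mailbox,
# including (if it has arrived) the bucket's :DOWN.
_ = KV.Registry.create(registry, "bogus")

assert KV.Registry.lookup(registry, "shopping") == :error
```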

@tomjoro (Author) commented Jan 7, 2016

The registry lookup is also a message, so it would be queued after the :DOWN and hence synchronized (I assume handle_info uses the same queue as handle_call). Thanks! I get it now. It'll always work...
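
A process has a single mailbox: `GenServer.call` requests and plain messages such as :DOWN are dequeued by the same loop, in arrival order. A sketch of the relevant callbacks, abbreviated from the guide's registry (with a call-based lookup, as in the supervisor chapter):

```elixir
# :DOWN arrives as a plain message and is handled by handle_info...
def handle_info({:DOWN, ref, :process, _pid, _reason}, {names, refs}) do
  {name, refs} = Map.pop(refs, ref)
  names = Map.delete(names, name)
  {:noreply, {names, refs}}
end

# ...while lookups arrive as call requests handled by handle_call.
# Both come out of the same mailbox, so a lookup that arrives after
# the :DOWN is only handled once the :DOWN has been processed.
def handle_call({:lookup, name}, _from, {names, _refs} = state) do
  {:reply, Map.fetch(names, name), state}
end
```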

@josevalim (Member)

> The registry lookup is also a message, so it would be queued after the :DOWN and hence synchronized (I assume handle_info uses the same queue as handle_call). Thanks! I get it now. It'll always work...

You got it! :D

@tomjoro (Author) commented Jan 7, 2016

IMO that's a super important feature (especially for testing) and maybe deserves a mention: "All :DOWN messages are guaranteed to be delivered at the same time". It seems like a common scenario when testing with third-party (can I call it that?) monitoring like this. Some of my Erlang friends didn't know this.

@josevalim (Member)

@fishcakez do you agree we should make it clear in the guides that DOWN messages are delivered at once?

@fishcakez (Member)

That guarantee does not exist: :DOWN messages are sent asynchronously, one after the other, and sending them costs reductions as normal, AFAIK.

@josevalim (Member)

Damn it. Are links the ones that are guaranteed? Is there any way to guarantee that a process has received a DOWN notification? Or are they delivered in order (the first to monitor is the first to receive it)?

@fishcakez (Member)

Links are the same as monitors. The only guarantee is that links are sent before monitors: https://docs.google.com/presentation/d/1dRUQ1jC0NXRyM-6WJWo15n-3hSZ1p7F8fl3X4bdhis8/pub?start=false&loop=false&delayms=60000#slide=id.g34cd3d110_0282

@josevalim (Member)

Thanks, I guess we cannot rely on any ordering guarantee either. I will think about it more.
