net: lwm2m: Add callback to report observers added/deleted and notification ack/timeout #38934
Conversation
…cks and timeouts, and observe adds and deletes
Two things crossed my mind while working on this:
Thanks for picking this up! @JPSELC pointed out in #38531 that the callback would need to report the token to the user, and I'm inclined to agree with him.
Ah, I read it and didn't think about it anymore. In my mind I thought it would be fine to just send one notification at a time, perhaps per path. I see your and @JPSELC's point though, so I will change it. Luckily for me, I believe the token is available in all 4 places where I added the callback anyway! I have 2 follow-up questions to make sure I understand you correctly:
Just had a quick look; it looks to me like notifications for the same observation will always have the same token. So this won't work for the use case I described anyway, in which case I believe the API is fine as is. But somebody should double-check, I might just be missing something. :)
Repeated sends of the same notification will have the same token, as far as I know, but that is not the same as a new notification for the same path.
For my use case, and indeed for the notification storing, it would be okay to limit it. However, we might be going beyond the spec in that case; that is, we maybe shouldn't impose our own limitations on the client. The tokens are generated per CoAP message, so once per notification instance; they remain the same for retransmissions because a retransmission is still the same CoAP message. New notifications will be new CoAP messages with new tokens. Hence, it should be possible to allow multiple notifications per path to be active at the same time.

We could make a generic callback that says "all notifications are done", or have the application keep track of it itself using the callbacks currently in the commits of this PR. In the latter case, it would need to do the bookkeeping of tokens itself, I guess. Instead of the path, or on top of the path, the token, or whatever arguments we agree to pass on, we could also extend the callback with a user data pointer so each application can do something smart for buffering or other purposes itself. We probably won't need all those arguments, but I'm sure we can think of something nice and usable for all of us!

For me personally, I would like to e.g. store temperature measurements with a timestamp, and send those as a bundled (level 2) notification. Because they are timestamped, the order wouldn't even be important. I would have to be able to pair a certain buffered entry [temperature, timestamp] to the
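As a minimal sketch of the token bookkeeping described above, the application could keep a small table pairing each in-flight notification token with the buffered sample it carries, so an ACK/timeout callback can find and release the entry. This is plain C with hypothetical names, not a Zephyr API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define MAX_PENDING 4
#define MAX_TOKEN_LEN 8

struct pending_sample {
	uint8_t token[MAX_TOKEN_LEN];
	uint8_t token_len;
	double temperature;
	int64_t timestamp;
	bool in_use;
};

static struct pending_sample pending[MAX_PENDING];

/* Remember a sample under the token of the notification that carries it. */
static bool pending_add(const uint8_t *token, uint8_t len,
			double temp, int64_t ts)
{
	for (int i = 0; i < MAX_PENDING; i++) {
		if (!pending[i].in_use) {
			memcpy(pending[i].token, token, len);
			pending[i].token_len = len;
			pending[i].temperature = temp;
			pending[i].timestamp = ts;
			pending[i].in_use = true;
			return true;
		}
	}
	return false; /* table full: at most MAX_PENDING live notifications */
}

/* Called from the (proposed) ACK/timeout callback: look up by token. */
static struct pending_sample *pending_find(const uint8_t *token, uint8_t len)
{
	for (int i = 0; i < MAX_PENDING; i++) {
		if (pending[i].in_use && pending[i].token_len == len &&
		    memcmp(pending[i].token, token, len) == 0) {
			return &pending[i];
		}
	}
	return NULL;
}
```

On ACK the application would clear `in_use` and drop the sample; on timeout it could keep the entry buffered for a later bundled notification.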
Not for notifications though, they explicitly set the token to the same as the observation. That is, unless I'm misunderstanding the code.
Ah yes, I think that's the case. |
This is probably just a convenience thing though, to easily look up the original observation, and we could "fix" this by adding an explicit link between original Notification-Msg -> Observation-Token (at the cost of memory consumption, obviously).
One side note: |
I just checked, and it's in the standard: (V1.0.2) Section 8.2.6 indicates "Token of CoAP layer is used to match the asynchronous
Great discussion guys, thanks for the clarification. Personally, I would like to propose adding a pointer to a user object to the notifications. That way, on timeout or ACK, we can notify the application. The user object could e.g. be a pointer to a structure { measurement, timestamp } or whatever; the application could even add a unique identifier to that structure.

I think this would solve the data buffering case quite well, leaving storage to the users and keeping LwM2M as just a transport protocol. One could have a list of measurements in e.g. RAM and get feedback on which ones are done and which ones are not. That list would cleanly indicate the number of live notifications, and buffer the data until ACKed. One drawback I do see is that the data would be duplicated, as it would live both in the notifications in the LwM2M engine and in the user application.

As for the use cases described in #38531, I think this user object approach could work as well?
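A minimal sketch of this user-object idea, assuming a hypothetical callback type and result enum (none of these names exist in the Zephyr LwM2M API; this only illustrates the engine handing the same pointer back on ACK or timeout):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Application-owned buffered sample; the engine never inspects it. */
struct measurement {
	double value;
	int64_t timestamp;
	uint32_t id;     /* application-chosen unique identifier */
	bool delivered;
};

enum notify_result {
	NOTIFY_ACKED,
	NOTIFY_TIMED_OUT,
};

/* Proposed shape: engine passes the user pointer back when the
 * notification completes, so the application can release or retry. */
typedef void (*notify_cb_t)(enum notify_result res, void *user_data);

static void on_notify_done(enum notify_result res, void *user_data)
{
	struct measurement *m = user_data;

	/* On ACK the buffered entry can be released; on timeout the
	 * application keeps it and may bundle it into a later notification. */
	m->delivered = (res == NOTIFY_ACKED);
}
```

The engine would store only the `void *`, which keeps the buffering policy (and memory) entirely on the application side.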
This is the `messages` array variable in `lwm2m_engine.c`:

```c
static struct lwm2m_message messages[CONFIG_LWM2M_ENGINE_MAX_MESSAGES];
```

(`zephyr/subsys/net/lib/lwm2m/lwm2m_engine.c`, line 4236 at d75b7a6.) It limits the maximum number of messages that are "active", and so limits the memory Zephyr uses for notifications. I think this may default to 3 or something like that, but it is obviously configurable.
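To make the pool behavior concrete, here is a toy model of such a fixed array, with `POOL_SIZE` standing in for `CONFIG_LWM2M_ENGINE_MAX_MESSAGES` (this is not the actual engine code, just the allocation pattern it describes):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define POOL_SIZE 3 /* stand-in for CONFIG_LWM2M_ENGINE_MAX_MESSAGES */

struct msg {
	bool in_use;
};

static struct msg pool[POOL_SIZE];

/* Find a free slot; returns NULL when all slots are live. */
static struct msg *msg_alloc(void)
{
	for (size_t i = 0; i < POOL_SIZE; i++) {
		if (!pool[i].in_use) {
			pool[i].in_use = true;
			return &pool[i];
		}
	}
	return NULL; /* pool exhausted: new notifications must wait */
}

static void msg_free(struct msg *m)
{
	m->in_use = false;
}
```

Once all slots are in use, allocation fails until a message completes and is freed, which is why the number of simultaneously live notifications is bounded by the Kconfig value.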
For everyone interested:
Right now, notifications will wait for space for new messages to become available and create them at that point.
Hey, I was not able to dig into the code in detail yet, but just to clarify:
Can the title of this PR be updated please, so it is clear what it's about?