Problem to solve
A number of rules rely on a user being logged into the device, or on other environmental factors, and can fail or return an unpredictable result when those conditions are not met.
Intended users
Users who run the compliance script under automated conditions, such as via an MDM, where the run is initiated by a LaunchDaemon and may occur prior to user login.
Further details
In my use case, we run the compliance script as part of a daily policy in Jamf Pro. Policies run this way are executed via a LaunchDaemon as root and do not depend on a user being logged in. Other MDMs work similarly, as this ensures inventory collection can occur even on infrequently used machines.
My current workaround is to create a custom version of each affected rule. For example, my custom os_show_filename_extensions_enable check is as follows:
```yaml
check: |
  if [[ -z "$CURRENT_USER" ]]; then
    if [[ $($plb -c "print os_show_filename_extensions_enable:finding" $audit_plist 2>/dev/null) == 'false' ]]; then
      echo 1
    fi
  else
    /usr/bin/sudo -u "$CURRENT_USER" /usr/bin/defaults read .GlobalPreferences AppleShowAllExtensions 2>/dev/null
  fi
```
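For reference, this check leans on variables the generated compliance script already defines. A rough sketch of those assumptions (the PlistBuddy shorthand and console-user lookup mirror what the generated script does; the audit plist identifier here is hypothetical):

```bash
# Context assumed by the check above (sketch; exact values come from
# the generated compliance script and its baseline identifier).
plb="/usr/libexec/PlistBuddy"
audit_plist="/Library/Preferences/org.example.audit.plist"  # hypothetical identifier

# Current console user; empty when sitting at the login window.
CURRENT_USER=$(/usr/sbin/scutil <<< "show State:/Users/ConsoleUser" \
  | /usr/bin/awk '/Name :/ && ! /loginwindow/ { print $3 }')
```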
This works well, but it requires a separate implementation for each rule and clutters the "important" logic portion of each rule.
Proposal
Ideally, I would like the script to retrieve the previous run's finding from the generated plist whenever the check portion returns an error (i.e., a non-zero exit code). This might require a slight rework of multiline check scripts, as the final exit code would need to reliably distinguish success from failure.
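As a rough illustration of the proposal (a sketch only, not the project's actual implementation; run_check, its arguments, and the plist key layout are assumptions):

```bash
# Hypothetical wrapper the generated script could apply to each rule.
run_check() {
    local rule="$1" check="$2"
    local result previous
    result=$(eval "$check" 2>/dev/null)
    if [[ $? -ne 0 ]]; then
        # The check errored (e.g., no console user): fall back to the
        # previous run's finding recorded in the audit plist.
        previous=$("$plb" -c "Print :${rule}:finding" "$audit_plist" 2>/dev/null)
        if [[ -n "$previous" ]]; then
            echo "$rule: reusing previous finding ($previous)"
            return 0
        fi
        # No previous value recorded: count it as a finding by default,
        # or follow a user-defined choice as discussed below.
    fi
    # ...otherwise compare $result to the rule's expected value as usual...
}
```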
Documentation
I don't believe there is much documentation on how rule checks are verified for successful execution. At the moment, it does not appear that they are verified at all.
Testing
Introducing this change could cause rules (especially custom rules) that return a non-zero exit code even when run successfully to be treated as failures. Long term, this would be a benefit, as it would make for a more reliable experience and could even provide an additional metric when testing rules.
What does success look like, and how can we measure that?
Success would mean that rules which require special conditions fail gracefully and refrain from reporting inaccurate information.
That's a good point, and I like the idea of reading the lastUserName value as a second try. Failing that, I think the result of a failed check where there is no previous value should ideally be a user-defined choice.
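For example, a minimal sketch of that second try, using the lastUserName key that loginwindow records for the last console session:

```bash
# If nobody is logged in, fall back to the last user loginwindow saw.
if [[ -z "$CURRENT_USER" ]]; then
    CURRENT_USER=$(/usr/bin/defaults read /Library/Preferences/com.apple.loginwindow lastUserName 2>/dev/null)
fi
```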
In my example above, I am implicitly choosing to treat any run where a previous result is unavailable as a finding. But I could see someone going the other way and wanting to be warned of noncompliance only when a finding is legitimate.
Would it make sense to expose the default behavior as an optional flag passed to the generate_guidance.py script? I'd suggest defaulting to counting failures as findings, to better match the current behavior; anyone wanting more lenient behavior would then have an option to override it.
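Concretely, the flag might look something like this (the flag name and value are hypothetical, not an existing generate_guidance.py option; paths are illustrative):

```bash
# Default: a failed check with no previous value counts as a finding.
./scripts/generate_guidance.py baselines/800-53r5_moderate.yaml -s

# Hypothetical override for the more lenient behavior:
./scripts/generate_guidance.py baselines/800-53r5_moderate.yaml -s \
  --failed-check-default non_finding
```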
All that said, if there is any interest in giving this idea a shot, I would be happy to work on a proof of concept and make a pull request.