Merge tag 'trace-v6.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace

Pull tracing updates from Steven Rostedt:

 - Cleanup with guard() and free() helpers

   There were several places in the code that had a lot of "goto out" in
   the error paths to either unlock a lock or free some memory that was
   allocated. But this is error prone. Convert the code over to use the
   guard() and free() helpers that let the compiler unlock locks or free
   memory when the function exits.

 - Update the Rust tracepoint code to use the C code too

   There was some duplication of the tracepoint code for Rust that did
   the same logic as the C code. Add a helper that makes it possible for
   both implementations to use the same logic in one place.

 - Add poll to trace event hist files

   It is useful to know when an event is triggered, or even with some
   filtering. Since hist files of events get updated when active and the
   event is triggered, allow applications to poll the hist file and wake
   up when an event is triggered. This will let the application know
   that the event it is waiting for happened.

 - Add :mod: command to enable events for current or future modules

   The function tracer already has a way to enable functions to be
   traced in modules by writing ":mod:<module>" into set_ftrace_filter.
   That will enable either all the functions for the module if it is
   loaded, or if it is not, it will cache that command, and when the
   module that matches <module> is loaded, its functions will be
   enabled. This also allows init functions to be traced. But currently
   events do not have that feature.

   Add the command where if ':mod:<module>' is written into set_event,
   then either all the module's events are enabled if it is loaded, or
   the command is cached so that the module's events are enabled when it
   is loaded. This also works from the kernel command line, where
   "trace_event=:mod:<module>" will enable the module's events when the
   module is loaded at boot up.

* tag 'trace-v6.14-3' of git://git.kernel.org/pub/scm/linux/kernel/git/trace/linux-trace: (26 commits)
  tracing: Fix output of set_event for some cached module events
  tracing: Fix allocation of printing set_event file content
  tracing: Rename update_cache() to update_mod_cache()
  tracing: Fix #if CONFIG_MODULES to #ifdef CONFIG_MODULES
  selftests/ftrace: Add test that tests event :mod: commands
  tracing: Cache ":mod:" events for modules not loaded yet
  tracing: Add :mod: command to enabled module events
  selftests/tracing: Add hist poll() support test
  tracing/hist: Support POLLPRI event for poll on histogram
  tracing/hist: Add poll(POLLIN) support on hist file
  tracing: Fix using ret variable in tracing_set_tracer()
  tracepoint: Reduce duplication of __DO_TRACE_CALL
  tracing/string: Create and use __free(argv_free) in trace_dynevent.c
  tracing: Switch trace_stat.c code over to use guard()
  tracing: Switch trace_stack.c code over to use guard()
  tracing: Switch trace_osnoise.c code over to use guard() and __free()
  tracing: Switch trace_events_synth.c code over to use guard()
  tracing: Switch trace_events_filter.c code over to use guard()
  tracing: Switch trace_events_trigger.c code over to use guard()
  tracing: Switch trace_events_hist.c code over to use guard()
  ...
commit e8744fbc83
21 changed files with 1054 additions and 482 deletions
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -7161,6 +7161,14 @@
 			comma-separated list of trace events to enable. See
 			also Documentation/trace/events.rst
 
+			To enable modules, use :mod: keyword:
+
+			  trace_event=:mod:<module>
+
+			The value before :mod: will only enable specific events
+			that are part of the module. See the above mentioned
+			document for more information.
+
 	trace_instance=[instance-info]
 			[FTRACE] Create a ring buffer instance early in boot up.
 			This will be listed in:
--- a/Documentation/trace/events.rst
+++ b/Documentation/trace/events.rst
@@ -55,6 +55,30 @@ command::
 
 	# echo 'irq:*' > /sys/kernel/tracing/set_event
 
+The set_event file may also be used to enable events associated to only
+a specific module::
+
+	# echo ':mod:<module>' > /sys/kernel/tracing/set_event
+
+Will enable all events in the module ``<module>``. If the module is not yet
+loaded, the string will be saved and when a module that matches ``<module>``
+is loaded, then it will apply the enabling of events then.
+
+The text before ``:mod:`` will be parsed to specify specific events that the
+module creates::
+
+	# echo '<match>:mod:<module>' > /sys/kernel/tracing/set_event
+
+The above will enable any system or event that ``<match>`` matches. If
+``<match>`` is ``"*"`` then it will match all events.
+
+To enable only a specific event within a system::
+
+	# echo '<system>:<event>:mod:<module>' > /sys/kernel/tracing/set_event
+
+If ``<event>`` is ``"*"`` then it will match all events within the system
+for a given module.
+
 2.2 Via the 'enable' toggle
 ---------------------------
 
--- a/include/linux/string.h
+++ b/include/linux/string.h
@@ -4,6 +4,7 @@
 
 #include <linux/args.h>
 #include <linux/array_size.h>
+#include <linux/cleanup.h>	/* for DEFINE_FREE() */
 #include <linux/compiler.h>	/* for inline */
 #include <linux/types.h>	/* for size_t */
 #include <linux/stddef.h>	/* for NULL */
@@ -312,6 +313,8 @@ extern void *kmemdup_array(const void *src, size_t count, size_t element_size, g
 extern char **argv_split(gfp_t gfp, const char *str, int *argcp);
 extern void argv_free(char **argv);
 
+DEFINE_FREE(argv_free, char **, if (!IS_ERR_OR_NULL(_T)) argv_free(_T))
+
 /* lib/cmdline.c */
 extern int get_option(char **str, int *pint);
 extern char *get_options(const char *str, int nints, int *ints);
--- a/include/linux/trace_events.h
+++ b/include/linux/trace_events.h
@@ -673,6 +673,20 @@ struct trace_event_file {
 	atomic_t		tm_ref;	/* trigger-mode reference counter */
 };
 
+#ifdef CONFIG_HIST_TRIGGERS
+extern struct irq_work hist_poll_work;
+extern wait_queue_head_t hist_poll_wq;
+
+static inline void hist_poll_wakeup(void)
+{
+	if (wq_has_sleeper(&hist_poll_wq))
+		irq_work_queue(&hist_poll_work);
+}
+
+#define hist_poll_wait(file, wait)	\
+	poll_wait(file, &hist_poll_wq, wait)
+#endif
+
 #define __TRACE_EVENT_FLAGS(name, value)				\
 	static int __init trace_init_flags_##name(void)			\
 	{								\
--- a/include/linux/tracepoint.h
+++ b/include/linux/tracepoint.h
@@ -218,7 +218,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 #define __DEFINE_RUST_DO_TRACE(name, proto, args)	\
 	notrace void rust_do_trace_##name(proto)	\
 	{						\
-		__rust_do_trace_##name(args);		\
+		__do_trace_##name(args);		\
 	}
 
 /*
@@ -268,7 +268,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 
 #define __DECLARE_TRACE(name, proto, args, cond, data_proto)		\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
-	static inline void __rust_do_trace_##name(proto)		\
+	static inline void __do_trace_##name(proto)			\
 	{								\
 		if (cond) {						\
 			guard(preempt_notrace)();			\
@@ -277,12 +277,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	}								\
 	static inline void trace_##name(proto)				\
 	{								\
-		if (static_branch_unlikely(&__tracepoint_##name.key)) {	\
-			if (cond) {					\
-				guard(preempt_notrace)();		\
-				__DO_TRACE_CALL(name, TP_ARGS(args));	\
-			}						\
-		}							\
+		if (static_branch_unlikely(&__tracepoint_##name.key))	\
+			__do_trace_##name(args);			\
 		if (IS_ENABLED(CONFIG_LOCKDEP) && (cond)) {		\
 			WARN_ONCE(!rcu_is_watching(),			\
 				  "RCU not watching for tracepoint");	\
@@ -291,7 +287,7 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 
 #define __DECLARE_TRACE_SYSCALL(name, proto, args, data_proto)		\
 	__DECLARE_TRACE_COMMON(name, PARAMS(proto), PARAMS(args), PARAMS(data_proto)) \
-	static inline void __rust_do_trace_##name(proto)		\
+	static inline void __do_trace_##name(proto)			\
 	{								\
 		guard(rcu_tasks_trace)();				\
 		__DO_TRACE_CALL(name, TP_ARGS(args));			\
@@ -299,10 +295,8 @@ static inline struct tracepoint *tracepoint_ptr_deref(tracepoint_ptr_t *p)
 	static inline void trace_##name(proto)				\
 	{								\
 		might_fault();						\
-		if (static_branch_unlikely(&__tracepoint_##name.key)) {	\
-			guard(rcu_tasks_trace)();			\
-			__DO_TRACE_CALL(name, TP_ARGS(args));		\
-		}							\
+		if (static_branch_unlikely(&__tracepoint_##name.key))	\
+			__do_trace_##name(args);			\
 		if (IS_ENABLED(CONFIG_LOCKDEP)) {			\
 			WARN_ONCE(!rcu_is_watching(),			\
 				  "RCU not watching for tracepoint");	\
--- a/kernel/trace/ftrace.c
+++ b/kernel/trace/ftrace.c
@@ -4908,23 +4908,6 @@ static int ftrace_hash_move_and_update_ops(struct ftrace_ops *ops,
 	return __ftrace_hash_move_and_update_ops(ops, orig_hash, hash, enable);
 }
 
-static bool module_exists(const char *module)
-{
-	/* All modules have the symbol __this_module */
-	static const char this_mod[] = "__this_module";
-	char modname[MAX_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];
-	unsigned long val;
-	int n;
-
-	n = snprintf(modname, sizeof(modname), "%s:%s", module, this_mod);
-
-	if (n > sizeof(modname) - 1)
-		return false;
-
-	val = module_kallsyms_lookup_name(modname);
-	return val != 0;
-}
-
 static int cache_mod(struct trace_array *tr,
 		     const char *func, char *module, int enable)
 {
@ -26,6 +26,7 @@
|
|||
#include <linux/hardirq.h>
|
||||
#include <linux/linkage.h>
|
||||
#include <linux/uaccess.h>
|
||||
#include <linux/cleanup.h>
|
||||
#include <linux/vmalloc.h>
|
||||
#include <linux/ftrace.h>
|
||||
#include <linux/module.h>
|
||||
|
@ -535,19 +536,16 @@ LIST_HEAD(ftrace_trace_arrays);
|
|||
int trace_array_get(struct trace_array *this_tr)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
int ret = -ENODEV;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr == this_tr) {
|
||||
tr->ref++;
|
||||
ret = 0;
|
||||
break;
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
static void __trace_array_put(struct trace_array *this_tr)
|
||||
|
@ -1443,22 +1441,20 @@ EXPORT_SYMBOL_GPL(tracing_snapshot_alloc);
|
|||
int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
|
||||
cond_update_fn_t update)
|
||||
{
|
||||
struct cond_snapshot *cond_snapshot;
|
||||
int ret = 0;
|
||||
struct cond_snapshot *cond_snapshot __free(kfree) =
|
||||
kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
|
||||
int ret;
|
||||
|
||||
cond_snapshot = kzalloc(sizeof(*cond_snapshot), GFP_KERNEL);
|
||||
if (!cond_snapshot)
|
||||
return -ENOMEM;
|
||||
|
||||
cond_snapshot->cond_data = cond_data;
|
||||
cond_snapshot->update = update;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
if (tr->current_trace->use_max_tr) {
|
||||
ret = -EBUSY;
|
||||
goto fail_unlock;
|
||||
}
|
||||
if (tr->current_trace->use_max_tr)
|
||||
return -EBUSY;
|
||||
|
||||
/*
|
||||
* The cond_snapshot can only change to NULL without the
|
||||
|
@ -1468,29 +1464,20 @@ int tracing_snapshot_cond_enable(struct trace_array *tr, void *cond_data,
|
|||
* do safely with only holding the trace_types_lock and not
|
||||
* having to take the max_lock.
|
||||
*/
|
||||
if (tr->cond_snapshot) {
|
||||
ret = -EBUSY;
|
||||
goto fail_unlock;
|
||||
}
|
||||
if (tr->cond_snapshot)
|
||||
return -EBUSY;
|
||||
|
||||
ret = tracing_arm_snapshot_locked(tr);
|
||||
if (ret)
|
||||
goto fail_unlock;
|
||||
return ret;
|
||||
|
||||
local_irq_disable();
|
||||
arch_spin_lock(&tr->max_lock);
|
||||
tr->cond_snapshot = cond_snapshot;
|
||||
tr->cond_snapshot = no_free_ptr(cond_snapshot);
|
||||
arch_spin_unlock(&tr->max_lock);
|
||||
local_irq_enable();
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
|
||||
fail_unlock:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
kfree(cond_snapshot);
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(tracing_snapshot_cond_enable);
|
||||
|
||||
|
@ -2203,10 +2190,10 @@ static __init int init_trace_selftests(void)
|
|||
|
||||
selftests_can_run = true;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
if (list_empty(&postponed_selftests))
|
||||
goto out;
|
||||
return 0;
|
||||
|
||||
pr_info("Running postponed tracer tests:\n");
|
||||
|
||||
|
@ -2235,9 +2222,6 @@ static __init int init_trace_selftests(void)
|
|||
}
|
||||
tracing_selftest_running = false;
|
||||
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
core_initcall(init_trace_selftests);
|
||||
|
@ -2807,7 +2791,7 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
|
|||
int save_tracepoint_printk;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&tracepoint_printk_mutex);
|
||||
guard(mutex)(&tracepoint_printk_mutex);
|
||||
save_tracepoint_printk = tracepoint_printk;
|
||||
|
||||
ret = proc_dointvec(table, write, buffer, lenp, ppos);
|
||||
|
@ -2820,16 +2804,13 @@ int tracepoint_printk_sysctl(const struct ctl_table *table, int write,
|
|||
tracepoint_printk = 0;
|
||||
|
||||
if (save_tracepoint_printk == tracepoint_printk)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
if (tracepoint_printk)
|
||||
static_key_enable(&tracepoint_printk_key.key);
|
||||
else
|
||||
static_key_disable(&tracepoint_printk_key.key);
|
||||
|
||||
out:
|
||||
mutex_unlock(&tracepoint_printk_mutex);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -5127,7 +5108,8 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
|
|||
u32 tracer_flags;
|
||||
int i;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
tracer_flags = tr->current_trace->flags->val;
|
||||
trace_opts = tr->current_trace->flags->opts;
|
||||
|
||||
|
@ -5144,7 +5126,6 @@ static int tracing_trace_options_show(struct seq_file *m, void *v)
|
|||
else
|
||||
seq_printf(m, "no%s\n", trace_opts[i].name);
|
||||
}
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return 0;
|
||||
}
|
||||
|
@ -5537,6 +5518,8 @@ static const char readme_msg[] =
|
|||
"\t efield: For event probes ('e' types), the field is on of the fields\n"
|
||||
"\t of the <attached-group>/<attached-event>.\n"
|
||||
#endif
|
||||
" set_event\t\t- Enables events by name written into it\n"
|
||||
"\t\t\t Can enable module events via: :mod:<module>\n"
|
||||
" events/\t\t- Directory containing all trace event subsystems:\n"
|
||||
" enable\t\t- Write 0/1 to enable/disable tracing of all events\n"
|
||||
" events/<system>/\t- Directory containing all trace events for <system>:\n"
|
||||
|
@ -5809,7 +5792,7 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
|
|||
return;
|
||||
}
|
||||
|
||||
mutex_lock(&trace_eval_mutex);
|
||||
guard(mutex)(&trace_eval_mutex);
|
||||
|
||||
if (!trace_eval_maps)
|
||||
trace_eval_maps = map_array;
|
||||
|
@ -5833,8 +5816,6 @@ trace_insert_eval_map_file(struct module *mod, struct trace_eval_map **start,
|
|||
map_array++;
|
||||
}
|
||||
memset(map_array, 0, sizeof(*map_array));
|
||||
|
||||
mutex_unlock(&trace_eval_mutex);
|
||||
}
|
||||
|
||||
static void trace_create_eval_file(struct dentry *d_tracer)
|
||||
|
@ -5998,23 +5979,18 @@ ssize_t tracing_resize_ring_buffer(struct trace_array *tr,
|
|||
{
|
||||
int ret;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
if (cpu_id != RING_BUFFER_ALL_CPUS) {
|
||||
/* make sure, this cpu is enabled in the mask */
|
||||
if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (!cpumask_test_cpu(cpu_id, tracing_buffer_mask))
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
ret = __tracing_resize_ring_buffer(tr, size, cpu_id);
|
||||
if (ret < 0)
|
||||
ret = -ENOMEM;
|
||||
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -6106,9 +6082,9 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
|
|||
#ifdef CONFIG_TRACER_MAX_TRACE
|
||||
bool had_max_tr;
|
||||
#endif
|
||||
int ret = 0;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
update_last_data(tr);
|
||||
|
||||
|
@ -6116,7 +6092,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
|
|||
ret = __tracing_resize_ring_buffer(tr, trace_buf_size,
|
||||
RING_BUFFER_ALL_CPUS);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
return ret;
|
||||
ret = 0;
|
||||
}
|
||||
|
||||
|
@ -6124,43 +6100,37 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
|
|||
if (strcmp(t->name, buf) == 0)
|
||||
break;
|
||||
}
|
||||
if (!t) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (!t)
|
||||
return -EINVAL;
|
||||
|
||||
if (t == tr->current_trace)
|
||||
goto out;
|
||||
return 0;
|
||||
|
||||
#ifdef CONFIG_TRACER_SNAPSHOT
|
||||
if (t->use_max_tr) {
|
||||
local_irq_disable();
|
||||
arch_spin_lock(&tr->max_lock);
|
||||
if (tr->cond_snapshot)
|
||||
ret = -EBUSY;
|
||||
ret = tr->cond_snapshot ? -EBUSY : 0;
|
||||
arch_spin_unlock(&tr->max_lock);
|
||||
local_irq_enable();
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
#endif
|
||||
/* Some tracers won't work on kernel command line */
|
||||
if (system_state < SYSTEM_RUNNING && t->noboot) {
|
||||
pr_warn("Tracer '%s' is not allowed on command line, ignored\n",
|
||||
t->name);
|
||||
goto out;
|
||||
return -EINVAL;
|
||||
}
|
||||
|
||||
/* Some tracers are only allowed for the top level buffer */
|
||||
if (!trace_ok_for_array(t, tr)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (!trace_ok_for_array(t, tr))
|
||||
return -EINVAL;
|
||||
|
||||
/* If trace pipe files are being read, we can't change the tracer */
|
||||
if (tr->trace_ref) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
if (tr->trace_ref)
|
||||
return -EBUSY;
|
||||
|
||||
trace_branch_disable();
|
||||
|
||||
|
@ -6191,7 +6161,7 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
|
|||
if (!had_max_tr && t->use_max_tr) {
|
||||
ret = tracing_arm_snapshot_locked(tr);
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
#else
|
||||
tr->current_trace = &nop_trace;
|
||||
|
@ -6204,17 +6174,15 @@ int tracing_set_tracer(struct trace_array *tr, const char *buf)
|
|||
if (t->use_max_tr)
|
||||
tracing_disarm_snapshot(tr);
|
||||
#endif
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
}
|
||||
|
||||
tr->current_trace = t;
|
||||
tr->current_trace->enabled++;
|
||||
trace_branch_enable(tr);
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
static ssize_t
|
||||
|
@ -6292,22 +6260,18 @@ tracing_thresh_write(struct file *filp, const char __user *ubuf,
|
|||
struct trace_array *tr = filp->private_data;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
ret = tracing_nsecs_write(&tracing_thresh, ubuf, cnt, ppos);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
if (tr->current_trace->update_thresh) {
|
||||
ret = tr->current_trace->update_thresh(tr);
|
||||
if (ret < 0)
|
||||
goto out;
|
||||
return ret;
|
||||
}
|
||||
|
||||
ret = cnt;
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
return cnt;
|
||||
}
|
||||
|
||||
#ifdef CONFIG_TRACER_MAX_TRACE
|
||||
|
@ -6526,31 +6490,29 @@ tracing_read_pipe(struct file *filp, char __user *ubuf,
|
|||
* This is just a matter of traces coherency, the ring buffer itself
|
||||
* is protected.
|
||||
*/
|
||||
mutex_lock(&iter->mutex);
|
||||
guard(mutex)(&iter->mutex);
|
||||
|
||||
/* return any leftover data */
|
||||
sret = trace_seq_to_user(&iter->seq, ubuf, cnt);
|
||||
if (sret != -EBUSY)
|
||||
goto out;
|
||||
return sret;
|
||||
|
||||
trace_seq_init(&iter->seq);
|
||||
|
||||
if (iter->trace->read) {
|
||||
sret = iter->trace->read(iter, filp, ubuf, cnt, ppos);
|
||||
if (sret)
|
||||
goto out;
|
||||
return sret;
|
||||
}
|
||||
|
||||
waitagain:
|
||||
sret = tracing_wait_pipe(filp);
|
||||
if (sret <= 0)
|
||||
goto out;
|
||||
return sret;
|
||||
|
||||
/* stop when tracing is finished */
|
||||
if (trace_empty(iter)) {
|
||||
sret = 0;
|
||||
goto out;
|
||||
}
|
||||
if (trace_empty(iter))
|
||||
return 0;
|
||||
|
||||
if (cnt >= TRACE_SEQ_BUFFER_SIZE)
|
||||
cnt = TRACE_SEQ_BUFFER_SIZE - 1;
|
||||
|
@ -6614,9 +6576,6 @@ waitagain:
|
|||
if (sret == -EBUSY)
|
||||
goto waitagain;
|
||||
|
||||
out:
|
||||
mutex_unlock(&iter->mutex);
|
||||
|
||||
return sret;
|
||||
}
|
||||
|
||||
|
@ -7208,25 +7167,19 @@ u64 tracing_event_time_stamp(struct trace_buffer *buffer, struct ring_buffer_eve
|
|||
*/
|
||||
int tracing_set_filter_buffering(struct trace_array *tr, bool set)
|
||||
{
|
||||
int ret = 0;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
if (set && tr->no_filter_buffering_ref++)
|
||||
goto out;
|
||||
return 0;
|
||||
|
||||
if (!set) {
|
||||
if (WARN_ON_ONCE(!tr->no_filter_buffering_ref)) {
|
||||
ret = -EINVAL;
|
||||
goto out;
|
||||
}
|
||||
if (WARN_ON_ONCE(!tr->no_filter_buffering_ref))
|
||||
return -EINVAL;
|
||||
|
||||
--tr->no_filter_buffering_ref;
|
||||
}
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
return 0;
|
||||
}
|
||||
|
||||
struct ftrace_buffer_info {
|
||||
|
@ -7302,12 +7255,10 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
|
|||
if (ret)
|
||||
return ret;
|
||||
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
if (tr->current_trace->use_max_tr) {
|
||||
ret = -EBUSY;
|
||||
goto out;
|
||||
}
|
||||
if (tr->current_trace->use_max_tr)
|
||||
return -EBUSY;
|
||||
|
||||
local_irq_disable();
|
||||
arch_spin_lock(&tr->max_lock);
|
||||
|
@ -7316,24 +7267,20 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
|
|||
arch_spin_unlock(&tr->max_lock);
|
||||
local_irq_enable();
|
||||
if (ret)
|
||||
goto out;
|
||||
return ret;
|
||||
|
||||
switch (val) {
|
||||
case 0:
|
||||
if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
|
||||
return -EINVAL;
|
||||
if (tr->allocated_snapshot)
|
||||
free_snapshot(tr);
|
||||
break;
|
||||
case 1:
|
||||
/* Only allow per-cpu swap if the ring buffer supports it */
|
||||
#ifndef CONFIG_RING_BUFFER_ALLOW_SWAP
|
||||
if (iter->cpu_file != RING_BUFFER_ALL_CPUS) {
|
||||
ret = -EINVAL;
|
||||
break;
|
||||
}
|
||||
if (iter->cpu_file != RING_BUFFER_ALL_CPUS)
|
||||
return -EINVAL;
|
||||
#endif
|
||||
if (tr->allocated_snapshot)
|
||||
ret = resize_buffer_duplicate_size(&tr->max_buffer,
|
||||
|
@ -7341,7 +7288,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
|
|||
|
||||
ret = tracing_arm_snapshot_locked(tr);
|
||||
if (ret)
|
||||
break;
|
||||
return ret;
|
||||
|
||||
/* Now, we're going to swap */
|
||||
if (iter->cpu_file == RING_BUFFER_ALL_CPUS) {
|
||||
|
@ -7368,8 +7315,7 @@ tracing_snapshot_write(struct file *filp, const char __user *ubuf, size_t cnt,
|
|||
*ppos += cnt;
|
||||
ret = cnt;
|
||||
}
|
||||
out:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -7755,12 +7701,11 @@ void tracing_log_err(struct trace_array *tr,
|
|||
|
||||
len += sizeof(CMD_PREFIX) + 2 * sizeof("\n") + strlen(cmd) + 1;
|
||||
|
||||
mutex_lock(&tracing_err_log_lock);
|
||||
guard(mutex)(&tracing_err_log_lock);
|
||||
|
||||
err = get_tracing_log_err(tr, len);
|
||||
if (PTR_ERR(err) == -ENOMEM) {
|
||||
mutex_unlock(&tracing_err_log_lock);
|
||||
if (PTR_ERR(err) == -ENOMEM)
|
||||
return;
|
||||
}
|
||||
|
||||
snprintf(err->loc, TRACING_LOG_LOC_MAX, "%s: error: ", loc);
|
||||
snprintf(err->cmd, len, "\n" CMD_PREFIX "%s\n", cmd);
|
||||
|
@ -7771,7 +7716,6 @@ void tracing_log_err(struct trace_array *tr,
|
|||
err->info.ts = local_clock();
|
||||
|
||||
list_add_tail(&err->list, &tr->err_log);
|
||||
mutex_unlock(&tracing_err_log_lock);
|
||||
}
|
||||
|
||||
static void clear_tracing_err_log(struct trace_array *tr)
|
||||
|
@ -9467,6 +9411,10 @@ trace_array_create_systems(const char *name, const char *systems,
|
|||
INIT_LIST_HEAD(&tr->hist_vars);
|
||||
INIT_LIST_HEAD(&tr->err_log);
|
||||
|
||||
#ifdef CONFIG_MODULES
|
||||
INIT_LIST_HEAD(&tr->mod_events);
|
||||
#endif
|
||||
|
||||
if (allocate_trace_buffers(tr, trace_buf_size) < 0)
|
||||
goto out_free_tr;
|
||||
|
||||
|
@ -9515,20 +9463,17 @@ static int instance_mkdir(const char *name)
|
|||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&event_mutex);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
ret = -EEXIST;
|
||||
if (trace_array_find(name))
|
||||
goto out_unlock;
|
||||
return -EEXIST;
|
||||
|
||||
tr = trace_array_create(name);
|
||||
|
||||
ret = PTR_ERR_OR_ZERO(tr);
|
||||
|
||||
out_unlock:
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
return ret;
|
||||
}
|
||||
|
||||
|
@ -9578,24 +9523,23 @@ struct trace_array *trace_array_get_by_name(const char *name, const char *system
|
|||
{
|
||||
struct trace_array *tr;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&event_mutex);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr->name && strcmp(tr->name, name) == 0)
|
||||
goto out_unlock;
|
||||
if (tr->name && strcmp(tr->name, name) == 0) {
|
||||
tr->ref++;
|
||||
return tr;
|
||||
}
|
||||
}
|
||||
|
||||
tr = trace_array_create_systems(name, systems, 0, 0);
|
||||
|
||||
if (IS_ERR(tr))
|
||||
tr = NULL;
|
||||
out_unlock:
|
||||
if (tr)
|
||||
else
|
||||
tr->ref++;
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
return tr;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_array_get_by_name);
|
||||
|
@ -9646,48 +9590,36 @@ static int __remove_instance(struct trace_array *tr)
|
|||
int trace_array_destroy(struct trace_array *this_tr)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
if (!this_tr)
|
||||
return -EINVAL;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&event_mutex);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
ret = -ENODEV;
|
||||
|
||||
/* Making sure trace array exists before destroying it. */
|
||||
list_for_each_entry(tr, &ftrace_trace_arrays, list) {
|
||||
if (tr == this_tr) {
|
||||
ret = __remove_instance(tr);
|
||||
break;
|
||||
}
|
||||
if (tr == this_tr)
|
||||
return __remove_instance(tr);
|
||||
}
|
||||
|
||||
mutex_unlock(&trace_types_lock);
|
||||
mutex_unlock(&event_mutex);
|
||||
|
||||
return ret;
|
||||
return -ENODEV;
|
||||
}
|
||||
EXPORT_SYMBOL_GPL(trace_array_destroy);
|
||||
|
||||
static int instance_rmdir(const char *name)
|
||||
{
|
||||
struct trace_array *tr;
|
||||
int ret;
|
||||
|
||||
mutex_lock(&event_mutex);
|
||||
mutex_lock(&trace_types_lock);
|
||||
guard(mutex)(&event_mutex);
|
||||
guard(mutex)(&trace_types_lock);
|
||||
|
||||
ret = -ENODEV;
|
||||
 	tr = trace_array_find(name);
-	if (tr)
-		ret = __remove_instance(tr);
+	if (!tr)
+		return -ENODEV;
 
-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
-
-	return ret;
+	return __remove_instance(tr);
 }
 
 static __init void create_trace_instances(struct dentry *d_tracer)
@@ -9700,19 +9632,16 @@ static __init void create_trace_instances(struct dentry *d_tracer)
 	if (MEM_FAIL(!trace_instance_dir, "Failed to create instances directory\n"))
 		return;
 
-	mutex_lock(&event_mutex);
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&event_mutex);
+	guard(mutex)(&trace_types_lock);
 
 	list_for_each_entry(tr, &ftrace_trace_arrays, list) {
 		if (!tr->name)
 			continue;
 		if (MEM_FAIL(trace_array_create_dir(tr) < 0,
			     "Failed to create instance directory\n"))
-			break;
+			return;
 	}
-
-	mutex_unlock(&trace_types_lock);
-	mutex_unlock(&event_mutex);
 }
 
 static void
@@ -9902,6 +9831,24 @@ late_initcall_sync(trace_eval_sync);
 
 #ifdef CONFIG_MODULES
+
+bool module_exists(const char *module)
+{
+	/* All modules have the symbol __this_module */
+	static const char this_mod[] = "__this_module";
+	char modname[MAX_PARAM_PREFIX_LEN + sizeof(this_mod) + 2];
+	unsigned long val;
+	int n;
+
+	n = snprintf(modname, sizeof(modname), "%s:%s", module, this_mod);
+
+	if (n > sizeof(modname) - 1)
+		return false;
+
+	val = module_kallsyms_lookup_name(modname);
+	return val != 0;
+}
+
 static void trace_module_add_evals(struct module *mod)
 {
 	if (!mod->num_trace_evals)
@@ -9926,7 +9873,7 @@ static void trace_module_remove_evals(struct module *mod)
 	if (!mod->num_trace_evals)
 		return;
 
-	mutex_lock(&trace_eval_mutex);
+	guard(mutex)(&trace_eval_mutex);
 
 	map = trace_eval_maps;
 
@@ -9938,12 +9885,10 @@ static void trace_module_remove_evals(struct module *mod)
 		map = map->tail.next;
 	}
 	if (!map)
-		goto out;
+		return;
 
 	*last = trace_eval_jmp_to_tail(map)->tail.next;
 	kfree(map);
- out:
-	mutex_unlock(&trace_eval_mutex);
 }
 #else
 static inline void trace_module_remove_evals(struct module *mod) { }
@@ -10616,6 +10561,10 @@ __init static int tracer_alloc_buffers(void)
 #endif
 	ftrace_init_global_array_ops(&global_trace);
 
+#ifdef CONFIG_MODULES
+	INIT_LIST_HEAD(&global_trace.mod_events);
+#endif
+
 	init_trace_flags_index(&global_trace);
 
 	register_tracer(&nop_trace);
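The `module_exists()` helper above treats an over-long module name as "does not exist" by checking `snprintf()`'s return value, which reports the length that *would* have been written. A minimal userspace sketch of that truncation check (the function name here is illustrative, not a kernel API):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of the truncation check used by module_exists(): snprintf()
 * returns the full length it would have produced, so n > len - 1 means
 * the "<module>:__this_module" string was clipped and must not be used
 * for a symbol lookup.
 */
bool format_symbol(char *buf, size_t len, const char *module)
{
	int n = snprintf(buf, len, "%s:%s", module, "__this_module");

	return n >= 0 && (size_t)n <= len - 1;
}
```

The same pattern applies any time a fixed buffer is built from untrusted-length input: check the would-be length, not just the buffer contents.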
@@ -400,6 +400,9 @@ struct trace_array {
 	cpumask_var_t		pipe_cpumask;
 	int			ref;
 	int			trace_ref;
+#ifdef CONFIG_MODULES
+	struct list_head	mod_events;
+#endif
 #ifdef CONFIG_FUNCTION_TRACER
 	struct ftrace_ops	*ops;
 	struct trace_pid_list	__rcu *function_pids;
@@ -435,6 +438,15 @@ enum {
 	TRACE_ARRAY_FL_MOD_INIT	= BIT(2),
 };
 
+#ifdef CONFIG_MODULES
+bool module_exists(const char *module);
+#else
+static inline bool module_exists(const char *module)
+{
+	return false;
+}
+#endif
+
 extern struct list_head ftrace_trace_arrays;
 
 extern struct mutex trace_types_lock;
@@ -74,24 +74,19 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
 	struct dyn_event *pos, *n;
 	char *system = NULL, *event, *p;
 	int argc, ret = -ENOENT;
-	char **argv;
+	char **argv __free(argv_free) = argv_split(GFP_KERNEL, raw_command, &argc);
 
-	argv = argv_split(GFP_KERNEL, raw_command, &argc);
 	if (!argv)
 		return -ENOMEM;
 
 	if (argv[0][0] == '-') {
-		if (argv[0][1] != ':') {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (argv[0][1] != ':')
+			return -EINVAL;
 		event = &argv[0][2];
 	} else {
 		event = strchr(argv[0], ':');
-		if (!event) {
-			ret = -EINVAL;
-			goto out;
-		}
+		if (!event)
+			return -EINVAL;
 		event++;
 	}
 
@@ -101,10 +96,8 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
 		event = p + 1;
 		*p = '\0';
 	}
-	if (!system && event[0] == '\0') {
-		ret = -EINVAL;
-		goto out;
-	}
+	if (!system && event[0] == '\0')
+		return -EINVAL;
 
 	mutex_lock(&event_mutex);
 	for_each_dyn_event_safe(pos, n) {
@@ -120,8 +113,6 @@ int dyn_event_release(const char *raw_command, struct dyn_event_operations *type
 	}
 	tracing_reset_all_online_cpus();
 	mutex_unlock(&event_mutex);
- out:
-	argv_free(argv);
 	return ret;
 }
@@ -869,6 +869,120 @@ static int ftrace_event_enable_disable(struct trace_event_file *file,
 	return __ftrace_event_enable_disable(file, enable, 0);
 }
 
+#ifdef CONFIG_MODULES
+struct event_mod_load {
+	struct list_head	list;
+	char			*module;
+	char			*match;
+	char			*system;
+	char			*event;
+};
+
+static void free_event_mod(struct event_mod_load *event_mod)
+{
+	list_del(&event_mod->list);
+	kfree(event_mod->module);
+	kfree(event_mod->match);
+	kfree(event_mod->system);
+	kfree(event_mod->event);
+	kfree(event_mod);
+}
+
+static void clear_mod_events(struct trace_array *tr)
+{
+	struct event_mod_load *event_mod, *n;
+
+	list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) {
+		free_event_mod(event_mod);
+	}
+}
+
+static int remove_cache_mod(struct trace_array *tr, const char *mod,
+			    const char *match, const char *system, const char *event)
+{
+	struct event_mod_load *event_mod, *n;
+	int ret = -EINVAL;
+
+	list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) {
+		if (strcmp(event_mod->module, mod) != 0)
+			continue;
+
+		if (match && strcmp(event_mod->match, match) != 0)
+			continue;
+
+		if (system &&
+		    (!event_mod->system || strcmp(event_mod->system, system) != 0))
+			continue;
+
+		if (event &&
+		    (!event_mod->event || strcmp(event_mod->event, event) != 0))
+			continue;
+
+		free_event_mod(event_mod);
+		ret = 0;
+	}
+
+	return ret;
+}
+
+static int cache_mod(struct trace_array *tr, const char *mod, int set,
+		     const char *match, const char *system, const char *event)
+{
+	struct event_mod_load *event_mod;
+
+	/* If the module exists, then this just failed to find an event */
+	if (module_exists(mod))
+		return -EINVAL;
+
+	/* See if this is to remove a cached filter */
+	if (!set)
+		return remove_cache_mod(tr, mod, match, system, event);
+
+	event_mod = kzalloc(sizeof(*event_mod), GFP_KERNEL);
+	if (!event_mod)
+		return -ENOMEM;
+
+	INIT_LIST_HEAD(&event_mod->list);
+	event_mod->module = kstrdup(mod, GFP_KERNEL);
+	if (!event_mod->module)
+		goto out_free;
+
+	if (match) {
+		event_mod->match = kstrdup(match, GFP_KERNEL);
+		if (!event_mod->match)
+			goto out_free;
+	}
+
+	if (system) {
+		event_mod->system = kstrdup(system, GFP_KERNEL);
+		if (!event_mod->system)
+			goto out_free;
+	}
+
+	if (event) {
+		event_mod->event = kstrdup(event, GFP_KERNEL);
+		if (!event_mod->event)
+			goto out_free;
+	}
+
+	list_add(&event_mod->list, &tr->mod_events);
+
+	return 0;
+
+ out_free:
+	free_event_mod(event_mod);
+
+	return -ENOMEM;
+}
+#else /* CONFIG_MODULES */
+static inline void clear_mod_events(struct trace_array *tr) { }
+static int cache_mod(struct trace_array *tr, const char *mod, int set,
+		     const char *match, const char *system, const char *event)
+{
+	return -EINVAL;
+}
+#endif
+
 static void ftrace_clear_events(struct trace_array *tr)
 {
 	struct trace_event_file *file;
@@ -877,6 +991,7 @@ static void ftrace_clear_events(struct trace_array *tr)
 	list_for_each_entry(file, &tr->events, list) {
 		ftrace_event_enable_disable(file, 0);
 	}
+	clear_mod_events(tr);
 	mutex_unlock(&event_mutex);
 }
@@ -1165,17 +1280,36 @@ static void remove_event_file_dir(struct trace_event_file *file)
  */
 static int
 __ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match,
-			      const char *sub, const char *event, int set)
+			      const char *sub, const char *event, int set,
+			      const char *mod)
 {
 	struct trace_event_file *file;
 	struct trace_event_call *call;
+	char *module __free(kfree) = NULL;
 	const char *name;
 	int ret = -EINVAL;
 	int eret = 0;
 
+	if (mod) {
+		char *p;
+
+		module = kstrdup(mod, GFP_KERNEL);
+		if (!module)
+			return -ENOMEM;
+
+		/* Replace all '-' with '_' as that's what modules do */
+		for (p = strchr(module, '-'); p; p = strchr(p + 1, '-'))
+			*p = '_';
+	}
+
 	list_for_each_entry(file, &tr->events, list) {
 
 		call = file->event_call;
 
+		/* If a module is specified, skip events that are not that module */
+		if (module && (!call->module || strcmp(module_name(call->module), module)))
+			continue;
+
 		name = trace_event_name(call);
 
 		if (!name || !call->class || !call->class->reg)
@@ -1208,16 +1342,24 @@ __ftrace_set_clr_event_nolock(struct trace_array *tr, const char *match,
 			ret = eret;
 	}
 
+	/*
+	 * If this is a module setting and nothing was found,
+	 * check if the module was loaded. If it wasn't cache it.
+	 */
+	if (module && ret == -EINVAL && !eret)
+		ret = cache_mod(tr, module, set, match, sub, event);
+
 	return ret;
 }
 
 static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
-				  const char *sub, const char *event, int set)
+				  const char *sub, const char *event, int set,
+				  const char *mod)
 {
 	int ret;
 
 	mutex_lock(&event_mutex);
-	ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set);
+	ret = __ftrace_set_clr_event_nolock(tr, match, sub, event, set, mod);
 	mutex_unlock(&event_mutex);
 
 	return ret;
@@ -1225,11 +1367,20 @@ static int __ftrace_set_clr_event(struct trace_array *tr, const char *match,
 
 int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
 {
-	char *event = NULL, *sub = NULL, *match;
+	char *event = NULL, *sub = NULL, *match, *mod;
 	int ret;
 
 	if (!tr)
 		return -ENOENT;
 
+	/* Module events can be appended with :mod:<module> */
+	mod = strstr(buf, ":mod:");
+	if (mod) {
+		*mod = '\0';
+		/* move to the module name */
+		mod += 5;
+	}
+
 	/*
 	 * The buf format can be <subsystem>:<event-name>
 	 *  *:<event-name> means any event by that name.
@@ -1252,9 +1403,13 @@ int ftrace_set_clr_event(struct trace_array *tr, char *buf, int set)
 			sub = NULL;
 		if (!strlen(event) || strcmp(event, "*") == 0)
 			event = NULL;
+	} else if (mod) {
+		/* Allow wildcard for no length or star */
+		if (!strlen(match) || strcmp(match, "*") == 0)
+			match = NULL;
 	}
 
-	ret = __ftrace_set_clr_event(tr, match, sub, event, set);
+	ret = __ftrace_set_clr_event(tr, match, sub, event, set, mod);
 
 	/* Put back the colon to allow this to be called again */
 	if (buf)
@@ -1282,7 +1437,7 @@ int trace_set_clr_event(const char *system, const char *event, int set)
 	if (!tr)
 		return -ENODEV;
 
-	return __ftrace_set_clr_event(tr, NULL, system, event, set);
+	return __ftrace_set_clr_event(tr, NULL, system, event, set, NULL);
 }
 EXPORT_SYMBOL_GPL(trace_set_clr_event);
 
@@ -1308,7 +1463,7 @@ int trace_array_set_clr_event(struct trace_array *tr, const char *system,
 		return -ENOENT;
 
 	set = (enable == true) ? 1 : 0;
-	return __ftrace_set_clr_event(tr, NULL, system, event, set);
+	return __ftrace_set_clr_event(tr, NULL, system, event, set, NULL);
}
 EXPORT_SYMBOL_GPL(trace_array_set_clr_event);
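The `:mod:` handling in `ftrace_set_clr_event()` above cuts the suffix off the buffer in place and advances past the five-byte `":mod:"` marker. A small userspace sketch of just that split step (the function name is illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/*
 * Sketch of the ":mod:" parsing in ftrace_set_clr_event(): find the
 * marker, terminate the event part there, and point *mod at the module
 * name that follows. Returns 1 if a module suffix was present.
 */
int split_mod(char *buf, char **mod)
{
	char *m = strstr(buf, ":mod:");

	*mod = NULL;
	if (!m)
		return 0;

	*m = '\0';		/* buf now holds only the event spec */
	*mod = m + 5;		/* skip strlen(":mod:") */
	return 1;
}
```

So `"sched:sched_switch:mod:snd"` splits into the match spec `"sched:sched_switch"` and the module name `"snd"`, which is exactly the shape the kernel then hands to `__ftrace_set_clr_event()`.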
@@ -1395,37 +1550,71 @@ static void *t_start(struct seq_file *m, loff_t *pos)
 	return file;
 }
 
+enum set_event_iter_type {
+	SET_EVENT_FILE,
+	SET_EVENT_MOD,
+};
+
+struct set_event_iter {
+	enum set_event_iter_type	type;
+	union {
+		struct trace_event_file	*file;
+		struct event_mod_load	*event_mod;
+	};
+};
+
 static void *
 s_next(struct seq_file *m, void *v, loff_t *pos)
 {
-	struct trace_event_file *file = v;
+	struct set_event_iter *iter = v;
+	struct trace_event_file *file;
 	struct trace_array *tr = m->private;
 
 	(*pos)++;
 
-	list_for_each_entry_continue(file, &tr->events, list) {
-		if (file->flags & EVENT_FILE_FL_ENABLED)
-			return file;
+	if (iter->type == SET_EVENT_FILE) {
+		file = iter->file;
+		list_for_each_entry_continue(file, &tr->events, list) {
+			if (file->flags & EVENT_FILE_FL_ENABLED) {
+				iter->file = file;
+				return iter;
+			}
+		}
+#ifdef CONFIG_MODULES
+		iter->type = SET_EVENT_MOD;
+		iter->event_mod = list_entry(&tr->mod_events, struct event_mod_load, list);
+#endif
 	}
 
+#ifdef CONFIG_MODULES
+	list_for_each_entry_continue(iter->event_mod, &tr->mod_events, list)
+		return iter;
+#endif
+
 	return NULL;
 }
 
 static void *s_start(struct seq_file *m, loff_t *pos)
 {
-	struct trace_event_file *file;
 	struct trace_array *tr = m->private;
+	struct set_event_iter *iter;
 	loff_t l;
 
+	iter = kzalloc(sizeof(*iter), GFP_KERNEL);
+	if (!iter)
+		return NULL;
+
 	mutex_lock(&event_mutex);
 
-	file = list_entry(&tr->events, struct trace_event_file, list);
+	iter->type = SET_EVENT_FILE;
+	iter->file = list_entry(&tr->events, struct trace_event_file, list);
 
 	for (l = 0; l <= *pos; ) {
-		file = s_next(m, file, &l);
-		if (!file)
+		iter = s_next(m, iter, &l);
+		if (!iter)
 			break;
 	}
-	return file;
+	return iter;
 }
 
 static int t_show(struct seq_file *m, void *v)
@@ -1445,6 +1634,45 @@ static void t_stop(struct seq_file *m, void *p)
 	mutex_unlock(&event_mutex);
 }
 
+#ifdef CONFIG_MODULES
+static int s_show(struct seq_file *m, void *v)
+{
+	struct set_event_iter *iter = v;
+	const char *system;
+	const char *event;
+
+	if (iter->type == SET_EVENT_FILE)
+		return t_show(m, iter->file);
+
+	/* When match is set, system and event are not */
+	if (iter->event_mod->match) {
+		seq_printf(m, "%s:mod:%s\n", iter->event_mod->match,
+			   iter->event_mod->module);
+		return 0;
+	}
+
+	system = iter->event_mod->system ? : "*";
+	event = iter->event_mod->event ? : "*";
+
+	seq_printf(m, "%s:%s:mod:%s\n", system, event, iter->event_mod->module);
+
+	return 0;
+}
+#else /* CONFIG_MODULES */
+static int s_show(struct seq_file *m, void *v)
+{
+	struct set_event_iter *iter = v;
+
+	return t_show(m, iter->file);
+}
+#endif
+
+static void s_stop(struct seq_file *m, void *p)
+{
+	kfree(p);
+	t_stop(m, NULL);
+}
+
 static void *
 __next(struct seq_file *m, void *v, loff_t *pos, int type)
 {
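The `s_show()` output above falls back to `"*"` for unset system/event fields so a cached `:mod:` line can be written straight back into `set_event`. A userspace sketch of that formatting rule (function name illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

/*
 * Sketch of how s_show() prints a cached module entry: a NULL system or
 * event falls back to "*", so the emitted line round-trips through the
 * set_event parser unchanged.
 */
int format_cached(char *out, size_t len, const char *system,
		  const char *event, const char *module)
{
	return snprintf(out, len, "%s:%s:mod:%s",
			system ? system : "*",
			event ? event : "*",
			module);
}
```

For example, a cached entry with no system and event `sched_switch` for module `snd` prints as `*:sched_switch:mod:snd`.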
@@ -1558,21 +1786,20 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
 	if (ret)
 		return ret;
 
+	guard(mutex)(&event_mutex);
+
 	switch (val) {
 	case 0:
 	case 1:
-		ret = -ENODEV;
-		mutex_lock(&event_mutex);
 		file = event_file_file(filp);
-		if (likely(file)) {
-			ret = tracing_update_buffers(file->tr);
-			if (ret < 0) {
-				mutex_unlock(&event_mutex);
-				return ret;
-			}
-			ret = ftrace_event_enable_disable(file, val);
-		}
-		mutex_unlock(&event_mutex);
+		if (!file)
+			return -ENODEV;
+		ret = tracing_update_buffers(file->tr);
+		if (ret < 0)
+			return ret;
+		ret = ftrace_event_enable_disable(file, val);
+		if (ret < 0)
+			return ret;
 		break;
 
 	default:
@@ -1581,7 +1808,7 @@ event_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
 
 	*ppos += cnt;
 
-	return ret ? ret : cnt;
+	return cnt;
 }
 
 static ssize_t
@@ -1659,7 +1886,7 @@ system_enable_write(struct file *filp, const char __user *ubuf, size_t cnt,
 	if (system)
 		name = system->name;
 
-	ret = __ftrace_set_clr_event(dir->tr, NULL, name, NULL, val);
+	ret = __ftrace_set_clr_event(dir->tr, NULL, name, NULL, val, NULL);
 	if (ret)
 		goto out;
 
@@ -2157,7 +2384,7 @@ event_pid_write(struct file *filp, const char __user *ubuf,
 	if (ret < 0)
 		return ret;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	if (type == TRACE_PIDS) {
 		filtered_pids = rcu_dereference_protected(tr->filtered_pids,
@@ -2173,7 +2400,7 @@ event_pid_write(struct file *filp, const char __user *ubuf,
 
 	ret = trace_pid_write(filtered_pids, &pid_list, ubuf, cnt);
 	if (ret < 0)
-		goto out;
+		return ret;
 
 	if (type == TRACE_PIDS)
 		rcu_assign_pointer(tr->filtered_pids, pid_list);
@@ -2198,11 +2425,7 @@ event_pid_write(struct file *filp, const char __user *ubuf,
 	 */
 	on_each_cpu(ignore_task_cpu, tr, 1);
 
- out:
-	mutex_unlock(&event_mutex);
-
-	if (ret > 0)
-		*ppos += ret;
+	*ppos += ret;
 
 	return ret;
 }
@@ -2237,8 +2460,8 @@ static const struct seq_operations show_event_seq_ops = {
 static const struct seq_operations show_set_event_seq_ops = {
 	.start = s_start,
 	.next = s_next,
-	.show = t_show,
-	.stop = t_stop,
+	.show = s_show,
+	.stop = s_stop,
 };
 
 static const struct seq_operations show_set_pid_seq_ops = {
@@ -3111,6 +3334,20 @@ static bool event_in_systems(struct trace_event_call *call,
 	return !*p || isspace(*p) || *p == ',';
 }
 
+#ifdef CONFIG_HIST_TRIGGERS
+/*
+ * Wake up waiter on the hist_poll_wq from irq_work because the hist trigger
+ * may happen in any context.
+ */
+static void hist_poll_event_irq_work(struct irq_work *work)
+{
+	wake_up_all(&hist_poll_wq);
+}
+
+DEFINE_IRQ_WORK(hist_poll_work, hist_poll_event_irq_work);
+DECLARE_WAIT_QUEUE_HEAD(hist_poll_wq);
+#endif
+
 static struct trace_event_file *
 trace_create_new_event(struct trace_event_call *call,
 		       struct trace_array *tr)
@@ -3269,13 +3506,13 @@ int trace_add_event_call(struct trace_event_call *call)
 	int ret;
 	lockdep_assert_held(&event_mutex);
 
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);
 
 	ret = __register_event(call, NULL);
-	if (ret >= 0)
-		__add_event_to_tracers(call);
+	if (ret < 0)
+		return ret;
 
-	mutex_unlock(&trace_types_lock);
+	__add_event_to_tracers(call);
 	return ret;
 }
 EXPORT_SYMBOL_GPL(trace_add_event_call);
@@ -3355,6 +3592,28 @@ EXPORT_SYMBOL_GPL(trace_remove_event_call);
 	     event++)
 
 #ifdef CONFIG_MODULES
+static void update_mod_cache(struct trace_array *tr, struct module *mod)
+{
+	struct event_mod_load *event_mod, *n;
+
+	list_for_each_entry_safe(event_mod, n, &tr->mod_events, list) {
+		if (strcmp(event_mod->module, mod->name) != 0)
+			continue;
+
+		__ftrace_set_clr_event_nolock(tr, event_mod->match,
+					      event_mod->system,
+					      event_mod->event, 1, mod->name);
+		free_event_mod(event_mod);
+	}
+}
+
+static void update_cache_events(struct module *mod)
+{
+	struct trace_array *tr;
+
+	list_for_each_entry(tr, &ftrace_trace_arrays, list)
+		update_mod_cache(tr, mod);
+}
+
 static void trace_module_add_events(struct module *mod)
 {
@@ -3377,6 +3636,8 @@ static void trace_module_add_events(struct module *mod)
 		__register_event(*call, mod);
 		__add_event_to_tracers(*call);
 	}
+
+	update_cache_events(mod);
 }
 
 static void trace_module_remove_events(struct module *mod)
@@ -3529,30 +3790,21 @@ struct trace_event_file *trace_get_event_file(const char *instance,
 		return ERR_PTR(ret);
 	}
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	file = find_event_file(tr, system, event);
 	if (!file) {
 		trace_array_put(tr);
-		ret = -EINVAL;
-		goto out;
+		return ERR_PTR(-EINVAL);
 	}
 
 	/* Don't let event modules unload while in use */
 	ret = trace_event_try_get_ref(file->event_call);
 	if (!ret) {
 		trace_array_put(tr);
-		ret = -EBUSY;
-		goto out;
+		return ERR_PTR(-EBUSY);
 	}
 
-	ret = 0;
- out:
-	mutex_unlock(&event_mutex);
-
-	if (ret)
-		file = ERR_PTR(ret);
-
 	return file;
 }
 EXPORT_SYMBOL_GPL(trace_get_event_file);
@@ -3770,6 +4022,7 @@ event_enable_func(struct trace_array *tr, struct ftrace_hash *hash,
 	struct trace_event_file *file;
 	struct ftrace_probe_ops *ops;
 	struct event_probe_data *data;
+	unsigned long count = -1;
 	const char *system;
 	const char *event;
 	char *number;
@@ -3789,12 +4042,11 @@ event_enable_func(struct trace_array *tr, struct ftrace_hash *hash,
 
 	event = strsep(&param, ":");
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
-	ret = -EINVAL;
 	file = find_event_file(tr, system, event);
 	if (!file)
-		goto out;
+		return -EINVAL;
 
 	enable = strcmp(cmd, ENABLE_EVENT_STR) == 0;
 
@@ -3803,74 +4055,62 @@ event_enable_func(struct trace_array *tr, struct ftrace_hash *hash,
 	else
 		ops = param ? &event_disable_count_probe_ops : &event_disable_probe_ops;
 
-	if (glob[0] == '!') {
-		ret = unregister_ftrace_function_probe_func(glob+1, tr, ops);
-		goto out;
-	}
+	if (glob[0] == '!')
+		return unregister_ftrace_function_probe_func(glob+1, tr, ops);
 
+	if (param) {
+		number = strsep(&param, ":");
+
+		if (!strlen(number))
+			return -EINVAL;
+
+		/*
+		 * We use the callback data field (which is a pointer)
+		 * as our counter.
+		 */
+		ret = kstrtoul(number, 0, &count);
+		if (ret)
+			return ret;
+	}
+
-	ret = -ENOMEM;
-
-	data = kzalloc(sizeof(*data), GFP_KERNEL);
-	if (!data)
-		goto out;
-
-	data->enable = enable;
-	data->count = -1;
-	data->file = file;
-
-	if (!param)
-		goto out_reg;
-
-	number = strsep(&param, ":");
-
-	ret = -EINVAL;
-	if (!strlen(number))
-		goto out_free;
-
-	/*
-	 * We use the callback data field (which is a pointer)
-	 * as our counter.
-	 */
-	ret = kstrtoul(number, 0, &data->count);
-	if (ret)
-		goto out_free;
-
- out_reg:
 	/* Don't let event modules unload while probe registered */
 	ret = trace_event_try_get_ref(file->event_call);
-	if (!ret) {
-		ret = -EBUSY;
-		goto out_free;
-	}
+	if (!ret)
+		return -EBUSY;
 
 	ret = __ftrace_event_enable_disable(file, 1, 1);
 	if (ret < 0)
 		goto out_put;
 
+	ret = -ENOMEM;
+	data = kzalloc(sizeof(*data), GFP_KERNEL);
+	if (!data)
+		goto out_put;
+
+	data->enable = enable;
+	data->count = count;
+	data->file = file;
+
 	ret = register_ftrace_function_probe(glob, tr, ops, data);
 	/*
 	 * The above returns on success the # of functions enabled,
 	 * but if it didn't find any functions it returns zero.
 	 * Consider no functions a failure too.
	 */
-	if (!ret) {
-		ret = -ENOENT;
-		goto out_disable;
-	} else if (ret < 0)
-		goto out_disable;
-	/* Just return zero, not the number of enabled functions */
-	ret = 0;
- out:
-	mutex_unlock(&event_mutex);
-	return ret;
-
- out_disable:
+	/* Just return zero, not the number of enabled functions */
+	if (ret > 0)
+		return 0;
+
+	kfree(data);
+
+	if (!ret)
+		ret = -ENOENT;
+
 	__ftrace_event_enable_disable(file, 0, 1);
  out_put:
 	trace_event_put_ref(file->event_call);
- out_free:
-	kfree(data);
-	goto out;
+	return ret;
 }
 
 static struct ftrace_func_command event_enable_cmd = {
@@ -4093,20 +4333,17 @@ early_event_add_tracer(struct dentry *parent, struct trace_array *tr)
 {
 	int ret;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	ret = create_event_toplevel_files(parent, tr);
 	if (ret)
-		goto out_unlock;
+		return ret;
 
 	down_write(&trace_event_sem);
 	__trace_early_add_event_dirs(tr);
 	up_write(&trace_event_sem);
 
- out_unlock:
-	mutex_unlock(&event_mutex);
-
-	return ret;
+	return 0;
 }
 
 /* Must be called with event_mutex held */
@@ -4121,7 +4358,7 @@ int event_trace_del_tracer(struct trace_array *tr)
 	__ftrace_clear_event_pids(tr, TRACE_PIDS | TRACE_NO_PIDS);
 
 	/* Disable any running events */
-	__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0);
+	__ftrace_set_clr_event_nolock(tr, NULL, NULL, NULL, 0, NULL);
 
 	/* Make sure no more events are being executed */
 	tracepoint_synchronize_unregister();
@@ -4405,7 +4642,7 @@ static __init void event_trace_self_tests(void)
 
 		pr_info("Testing event system %s: ", system->name);
 
-		ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 1);
+		ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 1, NULL);
 		if (WARN_ON_ONCE(ret)) {
 			pr_warn("error enabling system %s\n",
 				system->name);
@@ -4414,7 +4651,7 @@ static __init void event_trace_self_tests(void)
 
 		event_test_stuff();
 
-		ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 0);
+		ret = __ftrace_set_clr_event(tr, NULL, system->name, NULL, 0, NULL);
 		if (WARN_ON_ONCE(ret)) {
 			pr_warn("error disabling system %s\n",
 				system->name);
@@ -4429,7 +4666,7 @@ static __init void event_trace_self_tests(void)
 	pr_info("Running tests on all trace events:\n");
 	pr_info("Testing all events: ");
 
-	ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 1);
+	ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 1, NULL);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error enabling all events\n");
 		return;
@@ -4438,7 +4675,7 @@ static __init void event_trace_self_tests(void)
 	event_test_stuff();
 
 	/* reset sysname */
-	ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0);
+	ret = __ftrace_set_clr_event(tr, NULL, NULL, NULL, 0, NULL);
 	if (WARN_ON_ONCE(ret)) {
 		pr_warn("error disabling all events\n");
 		return;
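The guard() conversions above (event_enable_write, event_pid_write, early_event_add_tracer, and others) all rely on the same mechanism: a variable whose cleanup handler unlocks when it leaves scope, so early returns cannot leak the lock. A toy userspace sketch of that mechanism, using the compiler `cleanup` attribute the kernel helper is built on (the "lock" is a plain counter here, not a real mutex; all names are illustrative):

```c
#include <assert.h>

/*
 * Toy sketch of the kernel's guard(mutex)() pattern: the cleanup
 * attribute runs toy_unlock_cleanup() when _guard goes out of scope,
 * so every return path "unlocks" without a goto-out label.
 */
static int lock_depth;

static void toy_lock(void)
{
	lock_depth++;
}

static void toy_unlock_cleanup(int **unused)
{
	(void)unused;
	lock_depth--;
}

#define toy_guard() \
	int *_guard __attribute__((cleanup(toy_unlock_cleanup))) = \
		(toy_lock(), &lock_depth)

static int counter;

/* Increment under the "lock"; a forced failure exits early. */
int guarded_increment(int fail)
{
	toy_guard();

	if (fail)
		return -1;	/* early return still runs the cleanup */

	return ++counter;
}
```

After any call, `lock_depth` is back to zero on both the success and failure paths, which is exactly the invariant the goto-out chains in the old code existed to maintain by hand.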
@@ -2405,13 +2405,11 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
 	struct event_filter *filter = NULL;
 	int err = 0;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	/* Make sure the system still has events */
-	if (!dir->nr_events) {
-		err = -ENODEV;
-		goto out_unlock;
-	}
+	if (!dir->nr_events)
+		return -ENODEV;
 
 	if (!strcmp(strstrip(filter_string), "0")) {
 		filter_free_subsystem_preds(dir, tr);
@@ -2422,7 +2420,7 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
 		tracepoint_synchronize_unregister();
 		filter_free_subsystem_filters(dir, tr);
 		__free_filter(filter);
-		goto out_unlock;
+		return 0;
 	}
 
 	err = create_system_filter(dir, filter_string, &filter);
@@ -2434,8 +2432,6 @@ int apply_subsystem_event_filter(struct trace_subsystem_dir *dir,
 		__free_filter(system->filter);
 		system->filter = filter;
 	}
- out_unlock:
-	mutex_unlock(&event_mutex);
 
 	return err;
 }
@@ -2612,17 +2608,15 @@ int ftrace_profile_set_filter(struct perf_event *event, int event_id,
 	struct event_filter *filter = NULL;
 	struct trace_event_call *call;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	call = event->tp_event;
 
-	err = -EINVAL;
 	if (!call)
-		goto out_unlock;
+		return -EINVAL;
 
-	err = -EEXIST;
 	if (event->filter)
-		goto out_unlock;
+		return -EEXIST;
 
 	err = create_filter(NULL, call, filter_str, false, &filter);
 	if (err)
@@ -2637,9 +2631,6 @@ free_filter:
 	if (err || ftrace_event_is_function(call))
 		__free_filter(filter);
 
- out_unlock:
-	mutex_unlock(&event_mutex);
-
 	return err;
 }
@@ -5311,6 +5311,8 @@ static void event_hist_trigger(struct event_trigger_data *data,
 
 	if (resolve_var_refs(hist_data, key, var_ref_vals, true))
 		hist_trigger_actions(hist_data, elt, buffer, rec, rbe, key, var_ref_vals);
+
+	hist_poll_wakeup();
 }
 
 static void hist_trigger_stacktrace_print(struct seq_file *m,
@@ -5590,49 +5592,128 @@ static void hist_trigger_show(struct seq_file *m,
 		   n_entries, (u64)atomic64_read(&hist_data->map->drops));
 }
 
+struct hist_file_data {
+	struct file *file;
+	u64 last_read;
+	u64 last_act;
+};
+
+static u64 get_hist_hit_count(struct trace_event_file *event_file)
+{
+	struct hist_trigger_data *hist_data;
+	struct event_trigger_data *data;
+	u64 ret = 0;
+
+	list_for_each_entry(data, &event_file->triggers, list) {
+		if (data->cmd_ops->trigger_type == ETT_EVENT_HIST) {
+			hist_data = data->private_data;
+			ret += atomic64_read(&hist_data->map->hits);
+		}
+	}
+	return ret;
+}
+
 static int hist_show(struct seq_file *m, void *v)
 {
+	struct hist_file_data *hist_file = m->private;
 	struct event_trigger_data *data;
 	struct trace_event_file *event_file;
-	int n = 0, ret = 0;
+	int n = 0;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
-	event_file = event_file_file(m->private);
-	if (unlikely(!event_file)) {
-		ret = -ENODEV;
-		goto out_unlock;
-	}
+	event_file = event_file_file(hist_file->file);
+	if (unlikely(!event_file))
+		return -ENODEV;
 
 	list_for_each_entry(data, &event_file->triggers, list) {
 		if (data->cmd_ops->trigger_type == ETT_EVENT_HIST)
 			hist_trigger_show(m, data, n++);
 	}
+	hist_file->last_read = get_hist_hit_count(event_file);
+	/*
+	 * Update last_act too so that poll()/POLLPRI can wait for the next
+	 * event after any syscall on hist file.
+	 */
+	hist_file->last_act = hist_file->last_read;
 
- out_unlock:
-	mutex_unlock(&event_mutex);
 	return 0;
 }
 
+static __poll_t event_hist_poll(struct file *file, struct poll_table_struct *wait)
+{
+	struct trace_event_file *event_file;
+	struct seq_file *m = file->private_data;
+	struct hist_file_data *hist_file = m->private;
+	__poll_t ret = 0;
+	u64 cnt;
+
+	guard(mutex)(&event_mutex);
+
+	event_file = event_file_data(file);
+	if (!event_file)
+		return EPOLLERR;
+
+	hist_poll_wait(file, wait);
+
+	cnt = get_hist_hit_count(event_file);
+	if (hist_file->last_read != cnt)
+		ret |= EPOLLIN | EPOLLRDNORM;
+	if (hist_file->last_act != cnt) {
+		hist_file->last_act = cnt;
+		ret |= EPOLLPRI;
+	}
+
+	return ret;
+}
+
+static int event_hist_release(struct inode *inode, struct file *file)
+{
+	struct seq_file *m = file->private_data;
+	struct hist_file_data *hist_file = m->private;
+
+	kfree(hist_file);
+	return tracing_single_release_file_tr(inode, file);
+}
+
 static int event_hist_open(struct inode *inode, struct file *file)
 {
+	struct trace_event_file *event_file;
+	struct hist_file_data *hist_file;
 	int ret;
 
 	ret = tracing_open_file_tr(inode, file);
 	if (ret)
 		return ret;
 
+	guard(mutex)(&event_mutex);
+
+	event_file = event_file_data(file);
+	if (!event_file)
+		return -ENODEV;
+
+	hist_file = kzalloc(sizeof(*hist_file), GFP_KERNEL);
+	if (!hist_file)
+		return -ENOMEM;
+
+	hist_file->file = file;
+	hist_file->last_act = get_hist_hit_count(event_file);
+
 	/* Clear private_data to avoid warning in single_open() */
 	file->private_data = NULL;
-	return single_open(file, hist_show, file);
+	ret = single_open(file, hist_show, hist_file);
+	if (ret)
+		kfree(hist_file);
+
+	return ret;
 }
 
 const struct file_operations event_hist_fops = {
 	.open = event_hist_open,
 	.read = seq_read,
 	.llseek = seq_lseek,
-	.release = tracing_single_release_file_tr,
+	.release = event_hist_release,
+	.poll = event_hist_poll,
 };
 
 #ifdef CONFIG_HIST_TRIGGERS_DEBUG
@@ -5873,25 +5954,19 @@ static int hist_debug_show(struct seq_file *m, void *v)
 {
 	struct event_trigger_data *data;
 	struct trace_event_file *event_file;
-	int n = 0, ret = 0;
+	int n = 0;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
 	event_file = event_file_file(m->private);
-	if (unlikely(!event_file)) {
-		ret = -ENODEV;
-		goto out_unlock;
-	}
+	if (unlikely(!event_file))
+		return -ENODEV;
 
 	list_for_each_entry(data, &event_file->triggers, list) {
 		if (data->cmd_ops->trigger_type == ETT_EVENT_HIST)
 			hist_trigger_debug_show(m, data, n++);
 	}
 
-out_unlock:
-	mutex_unlock(&event_mutex);
-
-	return ret;
+	return 0;
 }
 
 static int event_hist_debug_open(struct inode *inode, struct file *file)
@@ -49,16 +49,11 @@ static char *last_cmd;
 
 static int errpos(const char *str)
 {
-	int ret = 0;
-
-	mutex_lock(&lastcmd_mutex);
+	guard(mutex)(&lastcmd_mutex);
 	if (!str || !last_cmd)
-		goto out;
+		return 0;
 
-	ret = err_pos(last_cmd, str);
- out:
-	mutex_unlock(&lastcmd_mutex);
-	return ret;
+	return err_pos(last_cmd, str);
 }
 
 static void last_cmd_set(const char *str)
@@ -74,14 +69,12 @@ static void last_cmd_set(const char *str)
 
 static void synth_err(u8 err_type, u16 err_pos)
 {
-	mutex_lock(&lastcmd_mutex);
+	guard(mutex)(&lastcmd_mutex);
 	if (!last_cmd)
-		goto out;
+		return;
 
 	tracing_log_err(NULL, "synthetic_events", last_cmd, err_text,
 			err_type, err_pos);
- out:
-	mutex_unlock(&lastcmd_mutex);
 }
 
 static int create_synth_event(const char *raw_command);
@@ -211,12 +211,10 @@ static int event_trigger_regex_open(struct inode *inode, struct file *file)
 	if (ret)
 		return ret;
 
-	mutex_lock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
-	if (unlikely(!event_file_file(file))) {
-		mutex_unlock(&event_mutex);
+	if (unlikely(!event_file_file(file)))
 		return -ENODEV;
-	}
 
 	if ((file->f_mode & FMODE_WRITE) &&
 	    (file->f_flags & O_TRUNC)) {
@@ -239,8 +237,6 @@ static int event_trigger_regex_open(struct inode *inode, struct file *file)
 		}
 	}
 
-	mutex_unlock(&event_mutex);
-
 	return ret;
 }
 
@@ -248,7 +244,6 @@ int trigger_process_regex(struct trace_event_file *file, char *buff)
 {
 	char *command, *next;
 	struct event_command *p;
-	int ret = -EINVAL;
 
 	next = buff = skip_spaces(buff);
 	command = strsep(&next, ": \t");
@@ -259,17 +254,14 @@ int trigger_process_regex(struct trace_event_file *file, char *buff)
 	}
 	command = (command[0] != '!') ? command : command + 1;
 
-	mutex_lock(&trigger_cmd_mutex);
-	list_for_each_entry(p, &trigger_commands, list) {
-		if (strcmp(p->name, command) == 0) {
-			ret = p->parse(p, file, buff, command, next);
-			goto out_unlock;
-		}
-	}
- out_unlock:
-	mutex_unlock(&trigger_cmd_mutex);
+	guard(mutex)(&trigger_cmd_mutex);
 
-	return ret;
+	list_for_each_entry(p, &trigger_commands, list) {
+		if (strcmp(p->name, command) == 0)
+			return p->parse(p, file, buff, command, next);
+	}
+
+	return -EINVAL;
 }
 
 static ssize_t event_trigger_regex_write(struct file *file,
@@ -278,7 +270,7 @@ static ssize_t event_trigger_regex_write(struct file *file,
 {
 	struct trace_event_file *event_file;
 	ssize_t ret;
-	char *buf;
+	char *buf __free(kfree) = NULL;
 
 	if (!cnt)
 		return 0;
@@ -292,24 +284,18 @@ static ssize_t event_trigger_regex_write(struct file *file,
 
 	strim(buf);
 
-	mutex_lock(&event_mutex);
-	event_file = event_file_file(file);
-	if (unlikely(!event_file)) {
-		mutex_unlock(&event_mutex);
-		kfree(buf);
-		return -ENODEV;
-	}
-	ret = trigger_process_regex(event_file, buf);
-	mutex_unlock(&event_mutex);
+	guard(mutex)(&event_mutex);
 
-	kfree(buf);
+	event_file = event_file_file(file);
+	if (unlikely(!event_file))
+		return -ENODEV;
 
+	ret = trigger_process_regex(event_file, buf);
 	if (ret < 0)
-		goto out;
+		return ret;
 
 	*ppos += cnt;
-	ret = cnt;
- out:
-	return ret;
+	return cnt;
 }
 
 static int event_trigger_regex_release(struct inode *inode, struct file *file)
@@ -359,20 +345,16 @@ const struct file_operations event_trigger_fops = {
 __init int register_event_command(struct event_command *cmd)
 {
 	struct event_command *p;
-	int ret = 0;
 
-	mutex_lock(&trigger_cmd_mutex);
+	guard(mutex)(&trigger_cmd_mutex);
+
 	list_for_each_entry(p, &trigger_commands, list) {
-		if (strcmp(cmd->name, p->name) == 0) {
-			ret = -EBUSY;
-			goto out_unlock;
-		}
+		if (strcmp(cmd->name, p->name) == 0)
+			return -EBUSY;
 	}
 	list_add(&cmd->list, &trigger_commands);
-out_unlock:
-	mutex_unlock(&trigger_cmd_mutex);
-
-	return ret;
+	return 0;
 }
 
 /*
@ -382,20 +364,17 @@ __init int register_event_command(struct event_command *cmd)
|
|||
__init int unregister_event_command(struct event_command *cmd)
|
||||
{
|
||||
struct event_command *p, *n;
|
||||
int ret = -ENODEV;
|
||||
|
||||
mutex_lock(&trigger_cmd_mutex);
|
||||
guard(mutex)(&trigger_cmd_mutex);
|
||||
|
||||
list_for_each_entry_safe(p, n, &trigger_commands, list) {
|
||||
if (strcmp(cmd->name, p->name) == 0) {
|
||||
ret = 0;
|
||||
list_del_init(&p->list);
|
||||
goto out_unlock;
|
||||
return 0;
|
||||
}
|
||||
}
|
||||
out_unlock:
|
||||
mutex_unlock(&trigger_cmd_mutex);
|
||||
|
||||
return ret;
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
/**
|
||||
|
|
|
@@ -2083,26 +2083,21 @@ static void osnoise_hotplug_workfn(struct work_struct *dummy)
 {
 	unsigned int cpu = smp_processor_id();
 
-	mutex_lock(&trace_types_lock);
+	guard(mutex)(&trace_types_lock);
 
 	if (!osnoise_has_registered_instances())
-		goto out_unlock_trace;
+		return;
 
-	mutex_lock(&interface_lock);
-	cpus_read_lock();
+	guard(mutex)(&interface_lock);
+	guard(cpus_read_lock)();
 
 	if (!cpu_online(cpu))
-		goto out_unlock;
+		return;
 
 	if (!cpumask_test_cpu(cpu, &osnoise_cpumask))
-		goto out_unlock;
+		return;
 
 	start_kthread(cpu);
-
-out_unlock:
-	cpus_read_unlock();
-	mutex_unlock(&interface_lock);
-out_unlock_trace:
-	mutex_unlock(&trace_types_lock);
 }
 
 static DECLARE_WORK(osnoise_hotplug_work, osnoise_hotplug_workfn);
@@ -2300,31 +2295,22 @@ static ssize_t
 osnoise_cpus_read(struct file *filp, char __user *ubuf, size_t count,
 		  loff_t *ppos)
 {
-	char *mask_str;
+	char *mask_str __free(kfree) = NULL;
 	int len;
 
-	mutex_lock(&interface_lock);
+	guard(mutex)(&interface_lock);
 
 	len = snprintf(NULL, 0, "%*pbl\n", cpumask_pr_args(&osnoise_cpumask)) + 1;
 	mask_str = kmalloc(len, GFP_KERNEL);
-	if (!mask_str) {
-		count = -ENOMEM;
-		goto out_unlock;
-	}
+	if (!mask_str)
+		return -ENOMEM;
 
 	len = snprintf(mask_str, len, "%*pbl\n", cpumask_pr_args(&osnoise_cpumask));
-	if (len >= count) {
-		count = -EINVAL;
-		goto out_free;
-	}
+	if (len >= count)
+		return -EINVAL;
 
 	count = simple_read_from_buffer(ubuf, count, ppos, mask_str, len);
-
-out_free:
-	kfree(mask_str);
-out_unlock:
-	mutex_unlock(&interface_lock);
-
 	return count;
 }
@@ -520,20 +520,18 @@ stack_trace_sysctl(const struct ctl_table *table, int write, void *buffer,
 	int was_enabled;
 	int ret;
 
-	mutex_lock(&stack_sysctl_mutex);
+	guard(mutex)(&stack_sysctl_mutex);
 	was_enabled = !!stack_tracer_enabled;
 
 	ret = proc_dointvec(table, write, buffer, lenp, ppos);
 
 	if (ret || !write || (was_enabled == !!stack_tracer_enabled))
-		goto out;
+		return ret;
 
 	if (stack_tracer_enabled)
 		register_ftrace_function(&trace_ops);
 	else
 		unregister_ftrace_function(&trace_ops);
-out:
-	mutex_unlock(&stack_sysctl_mutex);
 	return ret;
 }
@@ -128,7 +128,7 @@ static int stat_seq_init(struct stat_session *session)
 	int ret = 0;
 	int i;
 
-	mutex_lock(&session->stat_mutex);
+	guard(mutex)(&session->stat_mutex);
 	__reset_stat_session(session);
 
 	if (!ts->stat_cmp)
@@ -136,11 +136,11 @@ static int stat_seq_init(struct stat_session *session)
 
 	stat = ts->stat_start(ts);
 	if (!stat)
-		goto exit;
+		return 0;
 
 	ret = insert_stat(root, stat, ts->stat_cmp);
 	if (ret)
-		goto exit;
+		return ret;
 
 	/*
 	 * Iterate over the tracer stat entries and store them in an rbtree.
@@ -157,13 +157,10 @@ static int stat_seq_init(struct stat_session *session)
 		goto exit_free_rbtree;
 	}
 
-exit:
-	mutex_unlock(&session->stat_mutex);
-	return ret;
-
 exit_free_rbtree:
 	__reset_stat_session(session);
-	mutex_unlock(&session->stat_mutex);
 	return ret;
 }
@@ -308,7 +305,7 @@ static int init_stat_file(struct stat_session *session)
 int register_stat_tracer(struct tracer_stat *trace)
 {
 	struct stat_session *session, *node;
-	int ret = -EINVAL;
+	int ret;
 
 	if (!trace)
 		return -EINVAL;
@@ -316,18 +313,18 @@ int register_stat_tracer(struct tracer_stat *trace)
 	if (!trace->stat_start || !trace->stat_next || !trace->stat_show)
 		return -EINVAL;
 
+	guard(mutex)(&all_stat_sessions_mutex);
+
 	/* Already registered? */
-	mutex_lock(&all_stat_sessions_mutex);
 	list_for_each_entry(node, &all_stat_sessions, session_list) {
 		if (node->ts == trace)
-			goto out;
+			return -EINVAL;
 	}
 
-	ret = -ENOMEM;
 	/* Init the session */
 	session = kzalloc(sizeof(*session), GFP_KERNEL);
 	if (!session)
-		goto out;
+		return -ENOMEM;
 
 	session->ts = trace;
 	INIT_LIST_HEAD(&session->session_list);
@@ -336,16 +333,13 @@ int register_stat_tracer(struct tracer_stat *trace)
 	ret = init_stat_file(session);
 	if (ret) {
 		destroy_session(session);
-		goto out;
+		return ret;
 	}
 
-	ret = 0;
 	/* Register */
 	list_add_tail(&session->session_list, &all_stat_sessions);
-out:
-	mutex_unlock(&all_stat_sessions_mutex);
-
-	return ret;
+	return 0;
 }
 
 void unregister_stat_tracer(struct tracer_stat *trace)
@@ -6,4 +6,6 @@ TEST_PROGS := ftracetest-ktap
 TEST_FILES := test.d settings
 EXTRA_CLEAN := $(OUTPUT)/logs/*
 
+TEST_GEN_PROGS = poll
+
 include ../lib.mk
tools/testing/selftests/ftrace/poll.c (new file, 74 lines)
@@ -0,0 +1,74 @@
// SPDX-License-Identifier: GPL-2.0
/*
 * Simple poll on a file.
 *
 * Copyright (c) 2024 Google LLC.
 */

#include <errno.h>
#include <fcntl.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

#define BUFSIZE 4096

/*
 * Usage:
 *  poll [-I|-P] [-t timeout] FILE
 */
int main(int argc, char *argv[])
{
	struct pollfd pfd = {.events = POLLIN};
	char buf[BUFSIZE];
	int timeout = -1;
	int ret, opt;

	while ((opt = getopt(argc, argv, "IPt:")) != -1) {
		switch (opt) {
		case 'I':
			pfd.events = POLLIN;
			break;
		case 'P':
			pfd.events = POLLPRI;
			break;
		case 't':
			timeout = atoi(optarg);
			break;
		default:
			fprintf(stderr, "Usage: %s [-I|-P] [-t timeout] FILE\n",
				argv[0]);
			return -1;
		}
	}
	if (optind >= argc) {
		fprintf(stderr, "Error: Polling file is not specified\n");
		return -1;
	}

	pfd.fd = open(argv[optind], O_RDONLY);
	if (pfd.fd < 0) {
		fprintf(stderr, "failed to open %s", argv[optind]);
		perror("open");
		return -1;
	}

	/* Reset poll by read if POLLIN is specified. */
	if (pfd.events & POLLIN)
		do {} while (read(pfd.fd, buf, BUFSIZE) == BUFSIZE);

	ret = poll(&pfd, 1, timeout);
	if (ret < 0 && errno != EINTR) {
		perror("poll");
		return -1;
	}
	close(pfd.fd);

	/* If timeout happened (ret == 0), exit code is 1 */
	if (ret == 0)
		return 1;

	return 0;
}
tools/testing/selftests/ftrace/test.d/event/event-mod.tc (new file, 191 lines)
@@ -0,0 +1,191 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event tracing - enable/disable with module event
# requires: set_event "Can enable module events via: :mod:":README
# flags: instance

rmmod trace-events-sample ||:
if ! modprobe trace-events-sample ; then
	echo "No trace-events sample module - please make CONFIG_SAMPLE_TRACE_EVENTS=m"
	exit_unresolved
fi
trap "rmmod trace-events-sample" EXIT

# Set events for the module
echo ":mod:trace-events-sample" > set_event

test_all_enabled() {

	# Check if more than one is enabled
	grep -q sample-trace:foo_bar set_event
	grep -q sample-trace:foo_bar_with_cond set_event
	grep -q sample-trace:foo_bar_with_fn set_event

	# All of them should be enabled. Check via the enable file
	val=`cat events/sample-trace/enable`
	if [ $val -ne 1 ]; then
		exit_fail
	fi
}

clear_events() {
	echo > set_event
	val=`cat events/enable`
	if [ "$val" != "0" ]; then
		exit_fail
	fi
	count=`cat set_event | wc -l`
	if [ $count -ne 0 ]; then
		exit_fail
	fi
}

test_all_enabled

echo clear all events
echo 0 > events/enable

echo Confirm the events are disabled
val=`cat events/sample-trace/enable`
if [ $val -ne 0 ]; then
	exit_fail
fi

echo And the set_event file is empty

cnt=`wc -l < set_event`
if [ $cnt -ne 0 ]; then
	exit_fail
fi

echo now enable all events
echo 1 > events/enable

echo Confirm the events are enabled again
val=`cat events/sample-trace/enable`
if [ $val -ne 1 ]; then
	exit_fail
fi

echo disable just the module events
echo '!:mod:trace-events-sample' >> set_event

echo Should have mix of events enabled
val=`cat events/enable`
if [ "$val" != "X" ]; then
	exit_fail
fi

echo Confirm the module events are disabled
val=`cat events/sample-trace/enable`
if [ $val -ne 0 ]; then
	exit_fail
fi

echo 0 > events/enable

echo now enable the system events
echo 'sample-trace:mod:trace-events-sample' > set_event

test_all_enabled

echo clear all events
echo 0 > events/enable

echo Confirm the events are disabled
val=`cat events/sample-trace/enable`
if [ $val -ne 0 ]; then
	exit_fail
fi

echo Test enabling foo_bar only
echo 'foo_bar:mod:trace-events-sample' > set_event

grep -q sample-trace:foo_bar set_event

echo make sure nothing is found besides foo_bar
if grep -q -v sample-trace:foo_bar set_event ; then
	exit_fail
fi

echo Append another using the system and event name
echo 'sample-trace:foo_bar_with_cond:mod:trace-events-sample' >> set_event

grep -q sample-trace:foo_bar set_event
grep -q sample-trace:foo_bar_with_cond set_event

count=`cat set_event | wc -l`

if [ $count -ne 2 ]; then
	exit_fail
fi

clear_events

rmmod trace-events-sample

echo ':mod:trace-events-sample' > set_event

echo make sure that the module shows up, and '-' is converted to '_'
grep -q '\*:\*:mod:trace_events_sample' set_event

modprobe trace-events-sample

test_all_enabled

clear_events

rmmod trace-events-sample

echo Enable just the system events
echo 'sample-trace:mod:trace-events-sample' > set_event
grep -q 'sample-trace:mod:trace_events_sample' set_event

modprobe trace-events-sample

test_all_enabled

clear_events

rmmod trace-events-sample

echo Enable event with just event name
echo 'foo_bar:mod:trace-events-sample' > set_event
grep -q 'foo_bar:mod:trace_events_sample' set_event

echo Enable another event with both system and event name
echo 'sample-trace:foo_bar_with_cond:mod:trace-events-sample' >> set_event
grep -q 'sample-trace:foo_bar_with_cond:mod:trace_events_sample' set_event
echo Make sure the other event was still there
grep -q 'foo_bar:mod:trace_events_sample' set_event

modprobe trace-events-sample

echo There should be no :mod: cached events
if grep -q ':mod:' set_event; then
	exit_fail
fi

echo two events should be enabled
count=`cat set_event | wc -l`
if [ $count -ne 2 ]; then
	exit_fail
fi

echo only two events should be enabled
val=`cat events/sample-trace/enable`
if [ "$val" != "X" ]; then
	exit_fail
fi

val=`cat events/sample-trace/foo_bar/enable`
if [ "$val" != "1" ]; then
	exit_fail
fi

val=`cat events/sample-trace/foo_bar_with_cond/enable`
if [ "$val" != "1" ]; then
	exit_fail
fi

clear_trace
(new test file, 74 lines)
@@ -0,0 +1,74 @@
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0
# description: event trigger - test poll wait on histogram
# requires: set_event events/sched/sched_process_free/trigger events/sched/sched_process_free/hist
# flags: instance

POLL=${FTRACETEST_ROOT}/poll

if [ ! -x ${POLL} ]; then
	echo "poll program is not compiled!"
	exit_unresolved
fi

EVENT=events/sched/sched_process_free/

# Check whether the poll op is supported. Before poll was implemented on
# the hist file, it returned immediately with POLLIN | POLLOUT, but never
# POLLPRI.

# This must wait >1 sec and return 1 (timeout).
set +e
${POLL} -I -t 1000 ${EVENT}/hist
ret=$?
set -e
if [ ${ret} != 1 ]; then
	echo "poll on hist file is not supported"
	exit_unsupported
fi

# Test POLLIN
echo > trace
echo 'hist:key=comm if comm =="sleep"' > ${EVENT}/trigger
echo 1 > ${EVENT}/enable

# This sleep command will exit after 2 seconds.
sleep 2 &
BGPID=$!
# If a timeout happens, poll returns 1.
${POLL} -I -t 4000 ${EVENT}/hist
echo 0 > tracing_on

if [ -d /proc/${BGPID} ]; then
	echo "poll exits too soon"
	kill -KILL ${BGPID} ||:
	exit_fail
fi

if ! grep -qw "sleep" trace; then
	echo "poll exits before event happens"
	exit_fail
fi

# Test POLLPRI
echo > trace
echo 1 > tracing_on

# This sleep command will exit after 2 seconds.
sleep 2 &
BGPID=$!
# If a timeout happens, poll returns 1.
${POLL} -P -t 4000 ${EVENT}/hist
echo 0 > tracing_on

if [ -d /proc/${BGPID} ]; then
	echo "poll exits too soon"
	kill -KILL ${BGPID} ||:
	exit_fail
fi

if ! grep -qw "sleep" trace; then
	echo "poll exits before event happens"
	exit_fail
fi

exit_pass