The ASIT Toolbox
This page presents the tools explored and developed during the ASIT project.
Each section below includes a brief description of the tool and contact details for further information.
If you're interested in learning more or collaborating, feel free to reach out directly to the listed organizations.
We hope you find these insights useful and inspiring!
Falkor
Falkor is the next-generation platform for data-driven investigations. We are leading the way in flexible and intuitive data fusion and case management with powerful data enrichment to help solve the challenges faced by analysts in law enforcement, cyber threat intelligence, and other agencies globally. Our system combines sophisticated AI innovation with human experience and intuition.
Integrate any data, including internal databases, files, and OSINT. Visualize entities, events, and relations in maps, link analysis, timelines, and specialized dashboards. Organize your team's work in cases, generate automatic reports, and share insights with colleagues and decision makers. Collaborate securely with iron-clad, permission-based access and crack the case together. Falkor offers scalability, catering to the needs of both small teams and large organizations.
More info:
Email: hello@falkor.ai
Website: Falkor.ai
HEROES Tools:
More info:
Budi Arief - b.arief@kent.ac.uk
Francois Bremond - francois.bremond@inria.fr
Luis Javier García Villalba - javierv@ucm.es
Fran Casino - fran.casino@gmail.com
ALUNA Tools:
More info:
Luis Javier García Villalba - javierv@ucm.es
Virginia Franqueira - v.franqueira@kent.ac.uk
Pablo Gallegos - pablo.gallegos@idener.ai
Fran Casino - fran.casino@gmail.com
CESAGRAM AI-Based Solution
The CESAGRAM AI-based solution aims to enhance the capacity of law enforcement agencies to detect, prevent, and respond to online grooming activities across the Web and social media platforms. This solution integrates three core components into a user-friendly platform:
1. Online Data Gathering
2. Linguistic Analysis
3. Risk Assessment
1. Online Data Gathering
The CESAGRAM solution incorporates three specialised crawlers designed to gather textual content from online sources: a Web crawler and two social media crawlers for Twitch and YouTube, respectively. The Web crawler has been designed to systematically collect data from both the Surface and Dark Web. The Twitch crawler enables both synchronous and asynchronous monitoring of chat logs associated with live video streams on the platform. It leverages the official Twitch API to collect chat utterances in real time from multiple user accounts involved in discussions related to particular gaming video streams of interest. Finally, the YouTube crawler is designed to monitor YouTube and extract comments associated with videos uploaded to the platform using the YouTube Data API.
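To illustrate the data-gathering step, the sketch below shows how a YouTube comment crawler of this kind might assemble requests against the public YouTube Data API `commentThreads` endpoint and pull the comment text out of a response. The endpoint and parameter names come from the public API documentation; the helper functions themselves are illustrative assumptions, not the CESAGRAM implementation.

```python
from typing import Optional
from urllib.parse import urlencode

# Public endpoint of the YouTube Data API v3 for comment threads.
API_BASE = "https://www.googleapis.com/youtube/v3/commentThreads"

def build_comment_request(video_id: str, api_key: str,
                          page_token: Optional[str] = None,
                          max_results: int = 100) -> str:
    """Build a commentThreads.list request URL for one video.

    The API returns comment threads in pages of up to 100 items;
    `page_token` carries the `nextPageToken` from the previous
    response to fetch the following page.
    """
    params = {
        "part": "snippet",       # top-level comment text and metadata
        "videoId": video_id,
        "maxResults": max_results,
        "key": api_key,
    }
    if page_token:
        params["pageToken"] = page_token
    return f"{API_BASE}?{urlencode(params)}"

def extract_comments(response_json: dict) -> list:
    """Pull the plain-text bodies of top-level comments out of a
    commentThreads.list response."""
    return [
        item["snippet"]["topLevelComment"]["snippet"]["textOriginal"]
        for item in response_json.get("items", [])
    ]
```

A crawler would loop: fetch the built URL, extract the comments, and repeat with the response's `nextPageToken` until no further page is returned.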
2. Linguistic Analysis
The Linguistic Analysis tools (i.e., Named Entity Recognition, Sentiment Analysis, Emotion Analysis, Grooming Taxonomy Classification, and Authorship Analysis) provide advanced capabilities for analysing textual data collected by the online data gathering tools. In particular, they are designed to identify named entities (e.g., people, locations, organisations, and time-based events), assess sentiment (i.e., positive, negative, or neutral), detect emotions (i.e., happiness, anger, fear, sadness, disgust, and surprise), classify content according to grooming behaviour stages, and analyse writing patterns and stylistic factors.
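A minimal lexicon-based classifier illustrates the kind of three-way (positive/negative/neutral) output the Sentiment Analysis component produces. The word lists and scoring rule here are toy assumptions for illustration, not the project's actual models.

```python
# Toy sentiment lexicons -- placeholders, not the project's resources.
POSITIVE = {"great", "love", "happy", "fun", "cool", "awesome"}
NEGATIVE = {"hate", "angry", "sad", "scared", "awful", "bad"}

def sentiment(text: str) -> str:
    """Classify a message as 'positive', 'negative', or 'neutral'
    by counting hits against two tiny sentiment lexicons."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Production systems replace the lexicon lookup with trained language models, but the interface (message in, discrete label out) is the same shape as above.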
3. Risk Assessment
The Risk Assessment tool estimates the risk level related to the existence of potential grooming behaviour in online spaces. In particular, the tool utilises the outcomes of the Grooming Taxonomy Classification, applied to user messages and comments in the online conversations of interest, and provides an estimation of the risk level per user related to the existence of potential grooming incidents. Overall, four levels of risk are supported: (i) Very Low: Grooming behaviour is highly unlikely; (ii) Low: Grooming behaviour is unlikely; (iii) Moderate: Grooming behaviour is likely; (iv) High: Grooming behaviour is highly likely.
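The mapping from per-user classification outcomes to the four supported risk levels can be sketched as a simple thresholding rule. Both the input (the fraction of a user's messages classified into any grooming stage) and the threshold values below are illustrative assumptions, not the tool's actual scoring logic.

```python
def risk_level(flagged_fraction: float) -> str:
    """Map the fraction of a user's messages classified into a
    grooming stage to one of the four supported risk levels.
    Thresholds are illustrative placeholders.
    """
    if not 0.0 <= flagged_fraction <= 1.0:
        raise ValueError("flagged_fraction must be in [0, 1]")
    if flagged_fraction < 0.05:
        return "Very Low"   # grooming behaviour highly unlikely
    if flagged_fraction < 0.20:
        return "Low"        # grooming behaviour unlikely
    if flagged_fraction < 0.50:
        return "Moderate"   # grooming behaviour likely
    return "High"           # grooming behaviour highly likely
```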
More info:
Email: cesagram@iti.gr