Sitemap
A list of all the posts and pages found on the site. For you robots out there, there is an XML version available for digesting as well.
Pages
Posts
Future Blog Post
Published:
This post will show up by default. To disable scheduling of future posts, edit config.yml and set future: false.
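The setting mentioned above can be sketched as a short configuration fragment (a minimal sketch, assuming the file uses standard Jekyll YAML syntax; the comment text is illustrative):

```yaml
# config.yml -- controls whether posts dated in the future are rendered
future: false   # hide posts whose front-matter date is later than the build time
```

With future set to false, Jekyll skips any post whose date is still in the future when the site is built.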
Blog Post number 4
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 3
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 2
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
Blog Post number 1
Published:
This is a sample blog post. Lorem ipsum I can’t remember the rest of lorem ipsum and don’t have an internet connection right now. Testing testing testing this blog post. Blog posts are cool.
portfolio
Portfolio item number 1
Short description of portfolio item number 1
Portfolio item number 2
Short description of portfolio item number 2
publications
Design and Implementation of a Compiler for Simulation of Large-Scale Models
Published in Economics of Grids, Clouds, Systems, and Services, 2023
As systems become increasingly sophisticated, their state spaces grow so large that analysis and simulation become infeasible on a single-processor machine. Distributed simulation seems the natural way to address this problem. However, the main bottleneck of distributed simulation is the management, compilation, and deployment of large-scale models. This paper presents the experimental results of a compiler for large-scale Petri-net-based models.
Ensuring Reproducibility in AGD Detection: A Model Comparison Framework
Published in Conference on Detection of Intrusions and Malware & Vulnerability Assessment, 2024
Domain generation algorithms are commonly used by malware to generate command and control domains to contact during execution, avoiding fixed IP addresses or DNS domains, which are easily blocked. In recent years, many solutions based on artificial intelligence have been proposed for the detection of algorithmically generated domains (AGDs). However, there is no common umbrella that allows experiments to be replicated under the same conditions, making it difficult to assess how good one solution is compared to the others. To address the current lack of a common environment for model comparison, in this work we present a framework focused on training and comparing artificial intelligence models for AGD detection. As a use case, we have implemented and evaluated the models proposed in the latest works in this field, showing the framework's applicability.
Exploring the Zero-Shot Potential of Large Language Models for Detecting Algorithmically Generated Domains
Published in Conference on Detection of Intrusions and Malware & Vulnerability Assessment, 2025
Domain generation algorithms enable resilient malware communication by generating pseudo-random domain names. While traditional detection relies on task-specific algorithms, the use of Large Language Models (LLMs) to identify Algorithmically Generated Domains (AGDs) remains largely unexplored. This work evaluates nine LLMs from four major vendors in a zero-shot environment, without fine-tuning. The results show that LLMs can distinguish AGDs from legitimate domains, but they often exhibit a bias, leading to high false positive rates and overconfident predictions. Adding linguistic features offers minimal accuracy gains while increasing complexity and errors. These findings highlight both the promise and limitations of LLMs for AGD detection, indicating the need for further research before practical implementation.
The machines are watching: Exploring the potential of Large Language Models for detecting Algorithmically Generated Domains
Published in Journal of Information Security and Applications, 2025
Algorithmically Generated Domains (AGDs) are integral to many modern malware campaigns, allowing adversaries to establish resilient command and control channels. While machine learning techniques are increasingly employed to detect AGDs, the potential of Large Language Models (LLMs) in this domain remains largely underexplored. In this paper, we examine the ability of nine commercial LLMs to identify malicious AGDs, without parameter tuning or domain-specific training. We evaluate zero-shot and few-shot learning approaches, using minimal labeled examples and diverse datasets with multiple prompt strategies. Our results show that certain LLMs can achieve detection accuracy between 77.3% and 89.3%. In a 10-shot classification setting, the largest models excel at distinguishing between malware families, particularly those employing hash-based generation schemes, underscoring the promise of LLMs for advanced threat detection. However, significant limitations arise when these models encounter real-world DNS traffic. Performance degradation on benign but structurally suspect domains highlights the risk of false positives in operational environments. This shortcoming has real-world consequences for security practitioners, given the need to avoid erroneous domain blocking that disrupts legitimate services. Our findings underscore the practicality of LLM-driven AGD detection, while emphasizing key areas where future research is needed (such as more robust warning design and model refinement) to ensure reliability in production environments.
RAMPAGE: a software framework to ensure reproducibility in algorithmically generated domains detection
Published in Expert Systems with Applications, 2025
As part of its life cycle, malware can establish communication with its command and control server. To bypass static protection techniques, such as blocking certain IPs in firewalls or DNS server deny lists, malware can use algorithmically generated domains (AGDs). In recent years, many different solutions based on deep learning have been proposed to detect this type of domain. However, it is difficult to compare the proposed models because there is no common framework that allows experiments to be replicated under the same conditions: each previous work reports its evaluation results under different experimental conditions and even with different datasets. In this paper, we address this gap by proposing a software framework, dubbed RAMPAGE (fRAMework to comPAre aGd dEtectors), focused on training and comparing machine learning models for AGD detection. Furthermore, we propose a new model that uses logistic regression and, using RAMPAGE to obtain a fair comparison with different state-of-the-art models, achieves slightly better results than those obtained so far. In addition, the dataset built from real-world samples for evaluation, as well as the source code of RAMPAGE, are publicly released to facilitate its use and promote experimental reproducibility in this research field.
talks
Talk 1 on Relevant Topic in Your Field
Published:
This is a description of your talk, which is a markdown file that can be all markdown-ified like any other post. Yay markdown!
teaching
Private tutoring
Undergraduate course, N/A, 2024
Private tutoring for Advanced Technical Education (Grado Superior in Spain) students in SQL, Linux, and programming.