Research in the Department of Music Technology and Acoustics of the
Hellenic Mediterranean University: An Overview and Prospects
SPYROS BREZAS*, STELLA PASCHALIDOU, CHRISOULA ALEXANDRAKI,
MAKIS BAKAREZOS, CHRISTINE GEORGATOU, KONSTANTINOS KALERIS,
MAXIMOS KALIAKATSOS-PAPAKOSTAS, EMMANOUIL KANIOLAKIS-KALOUDIS,
EVAGGELOS KASELOURIS, YANNIS ORPHANOS, HELEN PAPADAKI,
NEKTARIOS A. PAPADOGIANNIS, KATERINA TZEDAKI, NIKOLAS VALSAMAKIS,
VASILIS DIMITRIOU
Department of Music Technology and Acoustics,
Hellenic Mediterranean University,
E. Daskalaki, Perivolia, 71433 Rethymnon,
GREECE
Abstract: - The Department of Music Technology and Acoustics of the Hellenic Mediterranean University
offers a unique higher education program in Greece, addressing the growing demand for specialists in music
technology, sound technology, and acoustics. It aims to educate specialized professionals in the rapidly
advancing scientific fields of music technology and acoustics, mainly driven by the swift progress in electronic
technology. The Department aims to address a gap in the professional market by producing highly skilled
graduates, capable not only of keeping up with the latest scientific and technological developments but also of
leading the way by introducing innovative approaches and methods. The Department combines art, science, and
technology, focusing on sound recording, analysis, synthesis, and music production. Music technology
encompasses various cutting-edge fields such as network music performance, artificial intelligence in music,
and music embodiment. Acoustics refers to fundamental aspects of sound as well as its generation,
transmission, and related phenomena. It includes research fields such as physical acoustics, optoacoustics, and
vibroacoustics. This overview presents the Department's research activities, methodologies, and results. A discussion of future research directions and pointers to technological evolution towards real-world music and acoustics applications is also provided.
Key-Words: - music technology, acoustics, research activities, network music, artificial intelligence, acoustic
ecology, optoacoustics, musical acoustics.
Received: February 21, 2023. Revised: November 16, 2023. Accepted: December 17, 2023. Published: March 20, 2024.
1 Introduction
The Department of Music Technology and
Acoustics (MTA) of the Hellenic Mediterranean
University (HMU), [1], is unique in higher
education in Greece and fulfills the ever-growing
demands for specialized engineers in the fields of
music technology, sound technology, and acoustics.
At an international level, it is among the few BSc
programs with such an interdisciplinary orientation
in their studies, along with the BSc program of the
University of Edinburgh entitled “Acoustics and
Music Technology”, [2] and that of the University
of Southampton, [3], entitled “Acoustics with
Music”. Greece currently lacks specialized human resources in Music Technology and Acoustics, while the evolution in electronic technology drives continuous progress and ever-growing professional prospects in the field. The Department bridges this
gap by training highly skilled graduates proficient in
a wide variety of related fields. These individuals
are not just adept at keeping up with the latest
advancements in these fields but are also at the
forefront, pioneering innovative approaches and
techniques. The Department has a strong research orientation and extroversion: students are directly exposed to developments in the field at the national and international level, and research projects and cooperation programs are established with Greek and foreign universities and stakeholders.
The art of music and sound is strongly
interconnected with science (mathematics, physics)
and technology (informatics, electronics). Music
Technology, [4], [5], focuses on the study and the
design of mechanisms, algorithms, tools, software applications, and electronic devices, and above all their synergy as used by musicians to create, perform, compose, notate, analyze, record, and process music and sound. Modern cutting-edge
fields (state-of-the-art) are Network Music
Performance, Music Information Retrieval,
Artificial Intelligence in Music, Machine Learning
in Audio and Music, Computational Musicology,
Electroacoustic Music, Acoustic Ecology and
Soundscape Ecology, Human Movement Sciences
and Technologies in Music, Gesture-Controlled
Interactive Audio and Music Systems and New
Musical Instruments. The science and technology of
Acoustics, [6], [7], studies the properties and
behavior of sound, as well as its applications.
Acoustics is the science of sound, covering its production, transmission, and effects, including biological, physiological, and perceptual effects.
The study of acoustics revolves around the
generation, propagation, and reception of
mechanical waves and vibrations. The basic
subfields of Acoustics, [8] are: Physical Acoustics,
Bioacoustics, Engineering Acoustics, Architectural
Acoustics, Environmental noise, Musical Acoustics,
Psychoacoustics, Optoacoustics, Room Acoustics,
Vibroacoustics, and Ultrasonics.
In this work, an overview of the research in the
Department is presented. The research activities,
along with the scientific methodology applied at
each research subfield, are described in section 2,
and representative scientific results are presented in
section 3. This review of our research areas, their subfields, and methodologies aims to summarize our research studies, to motivate new and innovative research works, and to provide pointers for the future enhancement of research achievements at a real-world application level, considering the strengthening of within-department and international collaborations.
2 Research Activities and
Methodology
The scientific methods developed and applied in the
research fields of music and sound technology and
of acoustics are presented here, categorized in the form of research subfields.
2.1 Music and Sound Technology
The main research subfields of music and sound
technology where the MTA significantly contributes
and exhibits leading expertise are music
embodiment, network music performance, artificial
intelligence in music, electroacoustic music
composition, and acoustic ecology and soundscape
ecology. These basic subfields are further presented
and analyzed along with the methods applied.
2.1.1 Music Embodiment
Traditionally, music has been studied through its
symbolic representation, i.e., music notation,
standing for the aesthetic value of a musical work as
a composition. Later, more emphasis was given to
its performative aspect, that is the expressive
rendition of a musical piece by a performer, studied
through the analysis of the musical signal, that is an
audio recording. In recent years we have been
observing a shift of interest with a growing number
of publications emphasizing the role of the human
body and its movement in music. The reason for this
shift lies in theories of embodied cognition—more
recently including embodied music cognition too,
[9], —that consider knowledge as an emergent
phenomenon that is acquired through the action of
doing and dispute the notion that cognition involves
representations, [10], [11]. Since multimodality has
been identified as a central quality of musical
experience and embodiment, [12], [13], empirical
studies of human body movement in music most
often involve capturing a plethora of multimodal
data by movement-related sensors and state-of-the-
art motion capture technologies, previously mostly
employed in the realm of cinema for animation
purposes.
Embodied music cognition is a term that
incorporates all music-related human body
movements, ranging from music performance to
music listening. Examples of experimental work
include, for instance, the study of effort in
Hindustani vocal improvisation on the occasions of
manual interactions with imaginary objects by
singers, [14]; the analysis and rendering of
multimodal data, especially electromyography data for analyzing muscle contraction as a measure
of bodily tension, performability and athleticism in
comparison to the musical tension of a composition,
[15]; the rhythmic and stylistic regional diversity of
musician-dancer interaction in traditional Cretan
dances, [16]; or even the study of single drumming
strokes as loading conditions to FEM-BEM
mathematical models in cymbal vibroacoustic
behavior, [17].
For instance, [14], reports on the first
ecologically valid study of effort-related mappings
between sound sculpting gestures (gestural
interactions with imaginary objects) and the voice in
Hindustani Dhrupad vocal improvisation. The aim
was to devise formalized descriptions to infer the
amount of effort that such interactions are perceived
to require and classify gestures as interactions with
elastic versus rigid objects. The findings were
obtained through the application of an empirical
sequential mixed methodology in analyzing original
multimodal data of Dhrupad vocal improvisation
performances.
The data was collected for the specific study,
primarily in India, and includes interviews, audio-
visual material, and 3D movement data captured by
a passive marker-based optical motion capturing
system (Naturalpoint Optitrack). Seventeen
vocalists, encompassing both professionals and
amateurs, with diverse genders, ages, levels of
musical experience, and training durations, were
enlisted for the study, only two of whom were of
non-Indian origin. To ensure uniformity in gestural resemblance, all selected individuals shared the same musical lineage as disciples of the esteemed vocalist
Zia Fariduddin Dagar, who was also recorded as
part of the study.
Participants were briefed solely on the study's
connection to music and movement, with no
additional details provided. Musicians received
compensation for the recording of performances at
an hourly rate, excluding interviews. Written
informed consent forms and recording agreements
release forms were signed by all participants. To
enhance ecological validity, musicians were
instructed to engage in improvisation without any
specific guidelines. All recordings took place in
domestic settings, typically in the living rooms
where musicians conduct their daily musical
activities, adapted to accommodate recording needs
as depicted in Figure 1.
Fig. 1: Typical equipment setup, photo taken at the
music school of maestro Zia Fariduddin Dagar in
Palaspe, Panvel in India, [14]
The methodology considered both first-person
and third-person perspectives in the analysis of
musical gestures, [18] and involved the integration
of qualitative ethnographic techniques (thematic
analysis of interviews and video observation
analysis) with quantitative methods (regression
analysis for effort inference and gesture
classification based on a combination of acoustic
and movement features). Figure 2 illustrates the
sequential mixed methodology that guided the
study, featuring gesture images sourced from [19].
The findings of this research will be detailed in the
Results Section 3.1.1.
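One strand of the quantitative leg of this mixed methodology is regression of perceived effort on acoustic and movement features. The sketch below shows, in outline only, what such an effort regression could look like using plain least squares; the feature matrix, ratings, and the new gesture are hypothetical placeholder values, not data from the study.

```python
import numpy as np

# Hypothetical feature matrix: each row is one gesture instance described by
# acoustic and movement descriptors (e.g. min/max pitch, mean hand velocity).
# Values are illustrative placeholders, not data from the study.
X = np.array([
    [0.2, 0.8, 0.10, 0.05],
    [0.5, 0.6, 0.30, 0.10],
    [0.1, 0.9, 0.05, 0.02],
    [0.7, 0.4, 0.40, 0.20],
    [0.3, 0.7, 0.20, 0.08],
])
# Perceived effort ratings (0-10 scale) assigned to each instance (placeholders).
y = np.array([6.0, 4.5, 7.5, 3.0, 5.5])

# Ordinary least squares with an intercept term.
X1 = np.hstack([np.ones((X.shape[0], 1)), X])
coeffs, *_ = np.linalg.lstsq(X1, y, rcond=None)
print("intercept and feature weights:", np.round(coeffs, 2))

# Predicted effort for a new, unseen gesture (placeholder feature values).
x_new = np.array([1.0, 0.4, 0.75, 0.15, 0.06])
print("predicted effort:", round(float(x_new @ coeffs), 2))
```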
The motivation for such systematic work is
twofold: On the one hand, to gain a deeper
understanding of various music traditions and
conditions, and on the other hand, to acquire
knowledge of how gesture-sound links deduced
from designed experiments could lead to human-
computer-interactions for music that are more
physically plausible, [18], [20]. Hence, the results of
such work may have implications for strategies for
the development of artificial gesture-sound
mappings in the design of electronic musical
instruments and interactive music and music-related
applications, [19], [21], [22].
Fig. 2: Sequential mixed methodology featuring
gesture images, [14]
2.1.2 Network Music Performance
Considering the performative aspect of music, the
last few decades have witnessed a significant body
of research efforts dedicated to enabling musicians
to collaborate in virtual environments, thus
circumventing the necessity for physical co-
presence. The relevant research area is known as
Networked Music Performance (NMP) and is
particularly challenging when considering true-
bidirectional audio-visual interactions of musicians
over computer networks, [23]. Compared to
common teleconferencing systems, NMP systems
have multiple requirements, which mainly account
for reducing communication latencies and
increasing the quality of live audio streams. Speech-
based human interaction, in teleconferencing and
VoIP applications, is highly tolerant to latency, with
an acceptable mouth-to-ear delay of 150-200 ms,
[24]. Unfortunately, in music performance, the
tolerable communication latency is lower by
approximately an order of magnitude, i.e., 25-30 ms, and it is known as the Ensemble Performance
Threshold (EPT), or “the level of delay at which
effective real-time musical collaboration shifts from
possible to impossible”, [25].
Further to latency, NMP systems are
characterized by excessive requirements in network
throughput. This is mainly because, unlike speech,
most musical instruments have a broad acoustic
spectrum, hence requiring high sampling rates (at
least 44.1 kHz, as opposed to speech signals that
commonly use 8 kHz). Moreover, due to the
excessive requirements in reducing latency and
increasing throughput, live audio signals are highly
prone to sound distortions owing to data losses over
the network. In Wide Area Networks (WAN),
packet loss is frequently observed and caused by
congested network paths or by faulty networking
hardware across the path. In the case of audio,
losing network packets will result in dropouts at the
receiving end. Audio dropouts correspond to signal
discontinuities perceived as glitches, which, in the
case of NMP, can seriously hinder the collaboration
of music performers.
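To put these constraints in perspective, a minimal back-of-the-envelope calculation can compare a one-way latency budget against the EPT and show the raw bitrate of an uncompressed CD-quality stream. All component values below are illustrative assumptions, not measurements from the systems described here.

```python
# Rough NMP budget with illustrative (assumed) values.

EPT_MS = 25.0                 # lower bound of the Ensemble Performance Threshold

# One-way latency contributions (milliseconds), all assumed for illustration:
buffering_ms = 2 * (64 / 48000) * 1000   # 64-sample blocks at 48 kHz, in and out
codec_ms = 5.0                            # algorithmic delay of a low-latency codec
network_ms = 12.0                         # propagation and routing over a WAN path
total_ms = buffering_ms + codec_ms + network_ms
print(f"one-way latency ≈ {total_ms:.1f} ms "
      f"({'within' if total_ms <= EPT_MS else 'beyond'} the EPT)")

# Raw (uncompressed) PCM bitrate for a stereo music stream, illustrating why
# music needs far more throughput than 8 kHz speech.
fs, bits, channels = 44100, 16, 2
bitrate_kbps = fs * bits * channels / 1000
print(f"uncompressed PCM bitrate ≈ {bitrate_kbps:.0f} kbit/s")
```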
Despite almost three decades of research efforts,
the main challenges faced by the implementation of
NMP systems have not been overcome. NMP remains a vision rather than a technological affordance and
may only be feasible under certain conditions,
namely through highly reliable network
infrastructures (e.g. academic networks) and short
geographical distances. Nevertheless, the ongoing
progress in network and audio codec technology,
e.g. the Opus codec, which is currently the de facto
standard for real-time audio streaming over IP,
allows us to increasingly consider alternative setups
in which NMP may be feasible as well as useful,
[23].
Moreover, recent progress in computational
intelligence and machine musicianship suggests the
advancement of NMP applications equipped with
predictive capabilities to anticipate and reproduce
musicians’ performance remotely, thereby
mitigating latency and quality constraints. At
present, NMP research is highly interdisciplinary as
it involves numerous aesthetic, technical, and
perceptual aspects. Among other domains involving
networked collaboration, the COVID-19 pandemic
revealed a compelling need for the development
of systems supporting online music education. In
music learning and teaching, it is rather uncommon
for connected peers to simultaneously perform the
same piece of music. It is instead more common that
a teacher or a student will perform a musical
excerpt, and others will be required to imitate or
discuss the performance. This practice allows for
alleviating the excessive requirements in network
reliability and realizing more feasible NMP setups.
Researchers appear to be increasingly investing in
the development of novel applications and services
for online music learning, [26]. The innovation
capacity as well as the improvement of user
experience in such efforts can be significantly
enriched by considering research accomplishments
on artificial intelligence, musical acoustics, room
acoustics, and music embodiment.
2.1.3 Artificial Intelligence in Music
Researchers in MTA have examined generative
deep learning methods since their early
developments, mainly regarding the possibilities
that such methods offer for human control of the
musical output. This control can take the form of pre-designed parameters, e.g., for generating drum rhythms that adapt to new contexts not encountered during training through proper annotation of compositional conditions, [27]; such models exhibited some interesting adaptations to time signatures that were not included during training. Control can also be exercised through real-time interaction, either by changing rhythm- and pitch-related parameters as the user moves within a two-dimensional square with predefined limits, or by allowing musicians to improvise on a given chart and having the system create the proper piano accompaniment, [28].
Additionally, such methods have proven useful
for visualizing large datasets under different
perspectives that are set by the user. Methods based
on Long Short-Term Memory (LSTM) and more
recent Transformer-based approaches have
produced interesting results that allow users to
explore large databases of jazz standard song charts
(i.e., scores with chord symbols) according to a
wide range of user-defined criteria. As in the case of
music generation, sequence learning methods
capture some high-level features that allow human
interaction and “communication” of some basic
concepts. For instance, given proper annotations,
some high-level features can be inferred from the
context, e.g., transformers were able to capture the
concept of 12-bar blues when trained to identify the
blues form from “raw” symbolic representations (without explicit annotations about the length of a piece in bars).
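The LSTM and Transformer models themselves are not reproduced here. As a much-simplified stand-in for the sequence-learning idea, the sketch below builds a bigram (Markov) model over chord symbols from made-up chart fragments; the corpus and chord names are purely illustrative and are not taken from the databases mentioned above.

```python
from collections import Counter, defaultdict

# Toy stand-in for sequence learning over chord symbols: a bigram model trained
# on made-up jazz-style chart fragments (illustrative data only).
charts = [
    ["Dm7", "G7", "Cmaj7", "Cmaj7"],
    ["Em7b5", "A7", "Dm7", "G7", "Cmaj7"],
    ["Dm7", "G7", "Em7", "A7", "Dm7", "G7", "Cmaj7"],
]

transitions = defaultdict(Counter)
for chart in charts:
    for current, nxt in zip(chart, chart[1:]):
        transitions[current][nxt] += 1

def most_likely_next(chord):
    """Return the most frequent continuation of a chord in the toy corpus."""
    if chord not in transitions:
        return None
    return transitions[chord].most_common(1)[0][0]

print(most_likely_next("Dm7"))   # -> 'G7'
print(most_likely_next("G7"))    # -> 'Cmaj7'
```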
Recent machine learning methods have exhibited
impressive results that have been developed to demo or product level by worldwide-recognized companies and universities. Such examples include the ChatGPT family of dialogue systems and DALL-E 2 for image generation, among many
others. Even though such deep learning methods
capture high-level concepts, there is an open
question about the extent to which the result they
produce can be creative, [29]. To put the question
differently, can those methods create new
conceptual relations that can capture unlearned but
“entailing” interpretations that explain new, unseen
data in new ways, or allow the generation thereof
that is not only (impressively) adhering to specific
(learned) norms, but also actively recombining
knowledge in a way that reveals something new?
This is what creative humans do naturally, even
without the need for huge training datasets; imagine
what low quantities of “data” were available to J.S.
Bach.
A model that describes and explains human
creativity is Conceptual Blending, [30]. This model
has inspired the development of theoretical methods
that explain musical creative processes, [31] and
computational methods (based on Goguen’s
formulation, [32]) for music generation (among
other fields) that combine rule-based and
“traditional” machine learning approaches. Those
methods learn specific components of harmonic
knowledge from data and can recombine those
components effectively, creating new harmonic
spaces that integrate learned parts in new
arrangements that are justified through the
preservation of well-established musical principles,
described through rule-based formulations.
This generative combined approach has led to
the development of the CHAMELEON melodic
harmonization assistant, which has been evaluated
by composers of different levels, as an assistive tool
for making melodic harmonizations, [33]. In addition, a similar approach has been examined for the cross-harmonization of jazz standards, [34], where the melody of one song is harmonized with the harmonic space of another song. Similar approaches have been examined for melody and drum rhythm generation. In the latter two
approaches, the “creative part” involved blending
high-level features of melodies and drum rhythms.
The “generative” component, i.e. rendering those
features to music, was carried out by genetic
algorithms that targeted the high-level features
mentioned above in their fitness functions.
What sets such research apart from current trends
in machine learning generative methods is the fact
that the creative component is separated from the
generative component. This approach in some sense
identifies the necessary creative processes on a
“System 2” level, while it takes advantage of the
“System 1” inherent properties of machine learning
methods. More details for this discussion can be
found in a popular article, [35].
2.1.4 Electroacoustic Music Composition -
Musical Instruments
MTA is represented by academic members in the
Hellenic Association of Electroacoustic Music
Composers (HELMCA), [36] and members of the
International Confederation of Electroacoustic
Music (CIME/ICEM), [37]. Electroacoustic Music
(EAM) is the artistic field that is in a constant search
for innovation, both in research and in the application of electroacoustic technology for the
aesthetic creation of sound forms and the musical
organization of sound, [38], [39], [40], [41]. Its
close relationship with the evolution of musical
technology, as well as contemporary aesthetic
searches, makes it a constantly renewed field of
research and artistic creation, [42]. The field of
EAM is a convergence of the techniques and
technologies of sound composition, processing, and
projection as well as aesthetic criteria and analytical
methodologies for the study and creation of sound
forms, [43].
From a historical perspective, the first field of application of music technology is the invention of new (in their time) musical instruments. Present studies of design and invention integrate different materials and technologies for sound production, manipulation, and diffusion.
These research studies present musical instruments
that can be electronic, electroacoustic, hybrid,
digital, for one or more musicians, “intelligent”,
extended, networked, automated, robotic, and
interactive, [44], [45]. Their design demands
knowledge and skills from various fields of science,
music, and technology, [44], [45], [46]. From the
beginning of the 20th century until today many new musical instruments have been invented, but only a few of them have survived and are still in use today, such as the Theremin, the Ondes Martenot, analogue modular synthesizers, and others. Over the past two decades,
there has been an ever-increasing interest in the
research, design, and use of new electronic musical
instruments, and members of MTA work in this
field.
2.1.5 Acoustic Ecology and Soundscape
Ecology
A subfield of research in MTA that is both artistic and scientific is acoustic ecology and
soundscape ecology, [47], [48]. Acoustic ecology,
also known as soundscape studies, is a field that
explores the interaction between humans and their
surroundings through sound, [49]. Soundscape
ecology investigates the acoustic connections
between various living organisms, including
humans, and their environment, whether they reside
in marine or terrestrial ecosystems.
The term soundscape was established by R. Murray Schafer, who defined it as the sonic environment, or technically, any portion of the sonic environment regarded as a field of study. The term refers to
actual environments or abstract constructions such
as musical compositions and tape montages,
particularly when considered as an environment,
[50]. Schafer’s terminology helps to express the idea
that the sound of a particular locality (its keynotes,
sound signals, and sound marks) can - like local
architecture, customs, and dress - express a
community’s identity to the extent that settlements
can be recognized and characterized by their
soundscapes, [49].
Acoustic ecology can form the basis of a dynamic monitoring instrument for the evaluation of the
environment by observing the acoustic signature of
species and landscapes, [51]. Acoustic indices are
increasingly being used when analyzing
soundscapes to gain information on biodiversity and
describe the environment. There has been considerable interest and research in developing and computing acoustic indices that represent the
characteristics of the soundscape, [52], [53], [54].
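As a rough illustration of how such an index can be computed, the sketch below evaluates a simplified acoustic-complexity-style index over a spectrogram. It is a generic simplification written for this overview, not the specific index definitions cited in [52], [53], [54], and the test signals are synthetic.

```python
import numpy as np
from scipy.signal import spectrogram

def simple_aci(signal, fs, nperseg=1024):
    """Simplified acoustic-complexity-style index: for each frequency bin, sum
    the absolute spectral differences between consecutive frames and normalize
    by the total intensity in that bin, then sum over bins."""
    f, t, S = spectrogram(signal, fs=fs, nperseg=nperseg)
    diffs = np.abs(np.diff(S, axis=1)).sum(axis=1)   # temporal variability per bin
    totals = S.sum(axis=1) + 1e-12                    # avoid division by zero
    return float((diffs / totals).sum())

# Synthetic example: a chirping tone is acoustically "busier" than steady noise.
fs = 22050
t = np.linspace(0, 5, 5 * fs, endpoint=False)
chirp = np.sin(2 * np.pi * (1000 + 500 * np.sin(2 * np.pi * 2 * t)) * t)
noise = 0.1 * np.random.randn(t.size)
print("index (chirp):", round(simple_aci(chirp, fs), 1))
print("index (noise):", round(simple_aci(noise, fs), 1))
```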
Ongoing research, in the frame of a PhD thesis, focuses on the soundscape of the White Mountains in Crete using analytical methods of acoustic ecology, soundscape ecology, and ecoacoustics. It investigates how soundscape analysis can provide a feasible approach for the environmental monitoring of acoustically unexplored areas in this region.
Additionally, observation of the soundscape as a means of monitoring ecosystem function and sustainability, together with analysis of the acoustic activity, are introduced as methods that can efficiently recognize changes in ecological integrity and can serve as a valuable tool for environmental management.
Furthermore, several research studies on specific
soundscapes of Crete and Greece have been
conducted in MTA in the framework of Bachelor
Theses. A mixed methodology of recording and
analysis has been developed and applied to those
projects. The study, recording, and preservation of
the sound diversity (analogous to biodiversity) of
protected natural areas aim not only at the advancement of scientific knowledge but also at raising awareness of the ecological balance of the world through sound. This aim is served through
soundscape musical composition that brings this
ideal balance to the focus of attention, [55].
2.2 Acoustics
The main research subfields of acoustics where the
MTA is significantly contributing and exhibiting
leading expertise are physical acoustics,
optoacoustics, ultrasonics, vibroacoustics, and
musical acoustics. These basic subfields are further
presented and analyzed along with the methods
applied.
2.2.1 Physical Acoustics, Optoacoustics,
Ultrasonics
Researchers of MTA in collaboration with the
Institute for Plasma Physics and Lasers (IPPL) of
the Centre of Research and Innovation of HMU are
experts in laser-matter interaction and have
developed whole-field dynamic laser interferometry
methods, [56], capable of studying the dynamic
behavior of the irradiated matter and the generation
and propagation of ultrasonic waves. These
interferometric methods have very high spatial and
temporal resolution, while nanosecond and
femtosecond laser pulses are used. A white-light
interferometry method has also been implemented to
monitor permanent damage and to evaluate the
ablation depth of the laser-irradiated samples.
Moreover, pump-probe transient reflectivity optical
setups, [57], have been developed capable of
studying the generation of nano-acoustic strains in
thin metallic films (such as Au, Ag, Ti, Ta)
deposited on substrates (such as Si, ZnO, glass)
using optoacoustic transduction on the laser
irradiated samples.
Researchers of MTA are experts in numerical
modeling and simulations of coupled multiphysics
problems using basic numerical methods like the
Finite Element Method (FEM), the Finite Difference
Method (FDM), and the Boundary Element Method
(BEM). Numerical methods can simulate
complicated processes, i.e., when the material
properties are anisotropic, viscoelastic, or
temperature-dependent, as well as complicated
geometries of dynamic structures. The numerical
methods have many advantages compared to the
analytical methods since the solution domain is
divided into many smaller domains that are allowed
to have different values of physical properties
and/or varying loading conditions. FEM is ideal for
solving complicated multiparametric problems like
laser-matter optoacoustic interactions. The
researchers of the Department have carried out
multiphysics FEM simulations to investigate the
interactions of pulsed lasers (ns, ps, or fs duration)
with thin solid films and study their dynamic
thermomechanical behavior, [56] and their
transduction efficiency. The FEM model is capable
to compute the phase changes of matter with respect to the material properties, as well as to
provide detailed insights into physical quantities
such as displacements, temperatures, velocities,
stresses, and plastic strains at every spatiotemporal
solution time step. These simulations can monitor
the generation and propagation of surface acoustic
waves (SAWs), [56] and longitudinal waves, [57]
and are validated by experimental interferometric
measurements and pump-probe techniques
developed by researchers of the Department.
To study the dynamic behavior of the irradiated
samples via FEM numerical simulations, a CAD geometry of the structure to be irradiated is first created, then a fine mesh geometry is generated, and a transient multiphysics coupled thermal-structural analysis takes place. The heat conductivity and
mechanical wave propagation equations are solved,
while the laser source is considered as a heat source
loading term for the simulations. Elastoplastic
material properties as well as temperature-
dependent material (thermal and mechanical)
properties are considered for the simulations. The
modeled structure can be any type of material such
as metal, polymer, semiconductor, composite, or
metamaterial. The appropriate boundary conditions
should be also provided for the developed model.
An important aspect of the developed models is that
the Lagrangian mesh may be locally adaptive
depending on the simulation needs.
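The full coupled FEM models are beyond the scope of this overview. As a minimal illustration of the workflow (laser source term, material properties, time stepping), the sketch below solves a one-dimensional heat-conduction problem with a Gaussian-in-time laser flux by explicit finite differences, a method also listed above; all material and laser values are assumed, order-of-magnitude placeholders and the structural coupling is omitted.

```python
import numpy as np

# Minimal 1D explicit finite-difference sketch of laser surface heating of a
# thin layer, as a simplified stand-in for the coupled FEM analyses described.
L = 1e-6            # layer thickness (m), assumed
N = 100             # number of grid points
dx = L / (N - 1)
k = 22.0            # thermal conductivity (W/m/K), assumed Ti-like value
rho, cp = 4500.0, 520.0          # density (kg/m^3), specific heat (J/kg/K)
alpha = k / (rho * cp)           # thermal diffusivity
dt = 0.4 * dx**2 / alpha         # stable explicit time step

tau = 6e-9          # laser pulse duration (s), nanosecond regime
F = 2e10            # absorbed surface flux amplitude (W/m^2), assumed

T = np.full(N, 300.0)            # initial temperature (K)
time, peak_T = 0.0, 300.0
for _ in range(6000):
    # Gaussian-in-time laser flux applied at the irradiated surface (x = 0).
    q = F * np.exp(-((time - 2 * tau) / tau) ** 2)
    lap = np.zeros(N)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2
    T += alpha * dt * lap
    T[0] += q * dt / (rho * cp * dx)   # deposit the flux in the first cell
    T[-1] = 300.0                      # far boundary held at ambient
    peak_T = max(peak_T, T[0])
    time += dt

print(f"peak surface temperature ≈ {peak_T:.0f} K over {time*1e9:.1f} ns")
```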
The recent development of compact and cost-
effective nanosecond, picosecond, and femtosecond
laser systems, capable of inducing breakdown in
ambient air or other gases and solids, has facilitated
the widespread integration of laser-plasma sound sources (LPSSs) across scientific and industrial domains. Typical LPSS applications include, among others,
Laser-Induced Breakdown Spectroscopy (LIBS),
[58], non-destructive testing and diagnostics [59],
signal transmission for underwater or air-water
communication, and military applications, [60]. The
Department has made significant contributions and
exhibits leading expertise in the study,
characterization, and exploitation of laser-plasma
sound sources in ambient air. LPSSs are generated by optoacoustic transduction of fast (nanosecond) or ultrafast (picosecond to femtosecond) laser pulses with sufficient energy focused on a gaseous,
liquid, or solid target. In ambient air, laser-induced
breakdown (LIB) with consequent generation of an
LPSS requires optical intensities above the threshold
of approximately 2×10¹¹ W/cm². Laser breakdown
triggers a fast thermalization process in the air
which results from the interaction of hot free
electrons with heavy particles (ions, atoms, and
molecules) within the ionization volume. The
thermalized air bubble undergoes rapid expansion
and elastic contraction, hence generating rapid
pressure fluctuations that lead to the emission of an
acoustic pulse. After the emission and propagation
of the pulse away from the source, the medium
returns to its initial state. The acoustic pulse has a
characteristic time-domain profile of an N-pulse
with a duration that can span from a few
microseconds to tens of microseconds, depending
on the characteristics of the incident laser radiation.
Concurrently, the excited volume emits light due to
luminescent processes and localized thermal
excitation. Previous research carried out by MTA in
collaboration with the IPPL of the HMU and the
Electrical and Computer Engineering (ECE)
Department of the University of Patras (UP) has
revealed a correlation between the optical and
acoustic signals generated by laser-induced
breakdown in air, enabling the prediction of the
acoustic emission of the LPSSs from the respective
light emission.
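A quick order-of-magnitude check of the breakdown condition can be made directly from the pulse parameters; the pulse energy, duration, and focal spot radius below are assumed illustrative values, not a specific configuration reported here.

```python
import math

# Order-of-magnitude check against the ~2e11 W/cm^2 breakdown threshold quoted
# above. All pulse parameters are assumed for illustration.
threshold = 2e11          # W/cm^2
energy_J = 0.1            # pulse energy: 100 mJ (assumed)
duration_s = 6e-9         # pulse duration: 6 ns (assumed)
spot_radius_cm = 20e-4    # focal spot radius: 20 micrometres (assumed)

peak_power_W = energy_J / duration_s
spot_area_cm2 = math.pi * spot_radius_cm**2
intensity = peak_power_W / spot_area_cm2

print(f"focused intensity ≈ {intensity:.2e} W/cm^2")
print("breakdown expected" if intensity > threshold else "below threshold")
```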
In terms of frequency content, the generated
acoustic N-pulse exhibits a low-end response with a
first-order high-pass profile (see also Results
section). For nanosecond laser pulses, this spectral
range extends from subsonic frequencies (< 20 Hz)
to the upper audible frequency range (~ 20 kHz) or
the near-ultrasound range (< 50 kHz). Femtosecond laser
pulses typically yield a wider frequency range that
extends well into the mid-ultrasound range (> 500
kHz), at a cost of reduced acoustic energy within the
audible range. At the high-frequency end of the
spectrum, the acoustic N-pulses exhibit a well-
defined response that diminishes with increasing
frequency. The peak pressure levels achieved by
LPSS can exhibit significant variation, ranging from
barely perceivable (a few decibels) to exceptionally
loud (130 dB or higher). This variability
predominantly depends on the total optical energy
deposited into the targeted medium, such as ambient
air. MTA and the aforementioned partners have
carried out extensive research on the correlation
between laser radiation characteristics, namely laser
pulse energy, duration, wavelength, and focusing
conditions, and acoustic pulse characteristics,
particularly acoustic pulse pressure, energy, and
duration, [61].
Additionally, the geometry of laser-plasma sound
sources ranges from a completely spherical (point-
like) configuration to an elongated one (cylindrical
or line-like). This variability is contingent upon the
characteristics of the incident laser radiation and
results in acoustic emissions spanning the entire
range from fully omnidirectional to highly
directional. Specifically, tightly focused short laser
pulses, typically in the nanosecond range, create
point-like plasma sources characterized by
dimensions of the order of a millimeter, emitting
spherical sound waves. Conversely, the loose
focusing of ultra-short laser pulses, in the
picosecond or femtosecond range, gives rise to the
formation of line-like sources, commonly known as
laser-plasma filaments. These filaments typically
possess sub-millimeter thickness and lengths that
can vary from a few millimeters to hundreds of
meters, contingent on the laser pulse energy and
focusing conditions. Laser-plasma filaments
produce cylindrical acoustic waves known for their
distinct directional and propagation characteristics.
Recently, MTA, IPPL, and collaborating partners
have introduced a computational model that allows
for the estimation of the acoustic directivity of the
resulting plasma source in the far field, considering
its specific geometry, [61]. Also, the team has
shown that energy deposition by non-linear
interaction of femtosecond laser pulses with ambient
air can be effectively modeled by use of Particle-In-
Cell (PIC) codes, [59]. Moreover, a research group
including members of the MTA has proposed the
exploitation of optoacoustic transduction for the
reproduction of audio signals via massless, spatially
unbound, movable, and potentially remote LPSSs,
[62], [63].
2.2.2 Vibroacoustics, Musical Acoustics
MTA is among the pioneers in the study of musical
instruments using laser-based interferometric
techniques. Experimental modal testing allows the
identification of modal parameters of vibrating
musical instruments such as natural frequencies,
mode shapes, and modal damping for the
substructures of the musical instruments. Modal
analysis is a combination of a method for the
calculation of the frequency response function
(FRF) of the vibrating instrument and the
visualization of the resonant modes. This analysis
has been the topic of numerous studies in the past
years, [64], [65].
A widely used modal analysis technique is based
on impulse response measurements utilizing an
impact hammer and an accelerometer, [66]. Using
the signals of the applied force and a kinematic
quantity (e.g. acceleration) the FRF can be
calculated by various estimators, which are ratios of
the auto and cross spectra of the input and output
signals. These measurements can provide the FRF,
which contains information about the frequencies
where the resonances occur and can lead to
vibration characteristics of the system such as
damping etc. On the other hand, they do not provide
any information concerning the vibration mode
itself, e.g., how the system vibrates and the
amplitude distribution. For this, holographic
methods are applied, which are also used in the field
of musical acoustics, [66], [67], [68], [69].
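For reference, the classical H1 estimator mentioned above is the cross-spectrum of force and response divided by the auto-spectrum of the force. The sketch below computes it with scipy on synthetic signals standing in for hammer and accelerometer data; it is a generic sketch under those assumptions, not the Department's measurement code.

```python
import numpy as np
from scipy.signal import csd, welch, lfilter

fs = 8192                          # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(0)

# Synthetic stand-ins for measured signals: broadband "hammer force" input and
# a "response" shaped by a single damped resonance near 440 Hz plus noise.
force = rng.standard_normal(t.size)
f0, zeta = 440.0, 0.02
theta = 2 * np.pi * f0 / fs        # pole angle of a simple digital resonator
r = np.exp(-zeta * 2 * np.pi * f0 / fs)
response = lfilter([1.0], [1.0, -2 * r * np.cos(theta), r**2], force)
response += 0.01 * rng.standard_normal(t.size)

# H1 estimator: cross-spectrum of input and output over the input auto-spectrum.
f, Sxy = csd(force, response, fs=fs, nperseg=2048)
_, Sxx = welch(force, fs=fs, nperseg=2048)
H1 = Sxy / Sxx

print(f"estimated resonance ≈ {f[np.argmax(np.abs(H1))]:.0f} Hz")
```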
An interferometric technique, namely Electronic
Speckle Pattern Interferometry (ESPI), [66], is used
in the Department for the visualization of vibration
modes. It is a technique that uses laser light together with video detection, recording, and processing to visualize static and dynamic displacements of the measured objects. A brief description of ESPI follows. A laser beam is split in two. The first beam illuminates the vibrating object; the light reflected from the object is combined with the second beam, and the resulting hologram is captured by a CCD camera. The
resulting images contain spatial information of the
vibrating surface, along with amplitude information
of the vibration perpendicular to the surface. The
combination of the impulse response measurement
with the ESPI results provides a detailed description
of the vibrating instrument.
Finite element analysis is ideal for predicting
how musical instruments respond to any kind of
force loads, vibrations, and variations in
environmental conditions (temperature, relative
humidity, etc.), [70], while in musical acoustics, the
BEM formulation is commonly used to calculate the
sound radiated by simulated musical instruments,
[71]. Researchers in the Department have simulated
the vibroacoustic behavior of string musical
instruments such as the violin and the bouzouki,
[66], as well as the vibroacoustic behavior of
percussion musical instruments, [17], [72]. The
vibroacoustic behavior of the string musical
instruments has also been validated by experimental
measurements using ESPI, impact hammer, and
psychoacoustic tests, [66], thus a collaboration of
researchers performing simulations and experiments
has been accomplished in the Department. In the
study of the vibroacoustic behavior of cymbals via
FEM-BEM simulations, motion experiments have
also been performed providing accurate data for the
drumstick cymbal interaction, [17]. Results of this
study may be further used as reverse engineering
inputs, to machine learning models for the
estimation of geometrical and mechanical
parameters of cymbals from audio signals,
enhancing the collaboration between the faculty
members of the Department.
Regarding the methodology for studying the
vibrational and acoustic behavior of musical
instruments via numerical simulations, a CAD
geometry of the musical instrument is initially
created, based on the real geometric characteristics
or via 3D scanning; then a fine mesh geometry is generated, and various numerical analyses, such as modal, frequency response function, and harmonic analysis, [73], in the frequency domain, as well as time-domain FEM-BEM vibroacoustic analysis, [17], can later be performed depending on the needs of the research. For all these different simulations, proper material
properties, loading, and boundary conditions are
provided to the developed model.
3 Results
The research in the scientific areas and the subfields
described above, has led to novel results and
cutting-edge applications. Representative research
results and key findings are presented here.
3.1 Music and Sound Technology
3.1.1 Music Embodiment
Regarding the aforementioned study (see section
2.1.1) which explores the embodied aspects of
Hindustani/Dhrupad vocal music, the findings
indicate that while Dhrupad singers exhibit some
variability in how they use their hands during
singing, there remains a level of consistency in how
they associate effort with different melodies and
types of gestural interactions with imaginary
objects. However, various cross-modal associations
are identified among the vocalists, which depend on
the specific melodic mode (raga), the mechanics of
vocal production, the structure of the improvisation
(alap), and analogies between different domains.
For instance, in the case of Dhrupad vocalist
Afzal Hussain, the effort level exerted by the
vocalist’s body is associated with:
1. The pitch range within each octave, specifically
the highest note reached during the ascending
portion of the melodic glide (whether in the upper or
lower part of the octave).
2. The melodic intention to ascend or move
toward the tonic note in the subsequent melodic
progression.
3. The size or interval of the ascending portion of
the melodic movement.
4. The asymmetry between ascending and
descending melodic glides, where ascending
corresponds to an intensification in effort, and
descending corresponds to an abatement in effort.
There is an observed trend for interactions with
elastic objects (stretching or pushing-compressing)
to be more effortful than interactions with rigid
objects (pulling gestures: medium levels; throwing:
low; two types of grips: effortless).
Additionally, in terms of gesture classification,
results have highlighted specific cross-modal
association trends between gesture classes
(interactions with elastic versus rigid imaginary
objects), effort levels (ranging from 0 to 10, 10 being the highest), melodic intention (ascent vs. descent), and octave pitch range (low or high part of the
octave); the latter being closely related to more
stable vs. unstable melodic movements for each
melodic mode (raga Jaunpuri).
The findings obtained from all participants also
demonstrate the feasibility of creating concise
models for estimating effort and classifying gestures
using a limited set of statistically significant
acoustic and movement features derived from the
original data. Two variations of these models were
developed: (a) one tailored to best suit each
performer and (b) one designed to capture shared
behaviors across different performers. Variations
between these models can be attributed to either
unique gestural styles, suggesting the need for an
individualized approach for each performer, or a
weakness of the selected low-level features in
capturing the fundamental aspects of the responses.
The latter could be due to the restricted dataset size
and the challenge of quantifying subjective aspects
that involve some level of interpretation.
In essence, these findings underscore a
substantial connection between effort and melodic
characteristics, although the relationship with
performers’ hand gestures appears to be less
straightforward, more personalized, and potentially
less consistent. For example, the subsequent
analytical representation of effort levels relies on a
general linear model derived from the original data,
which exhibits a higher degree of overlap across
performers in terms of the number and type of
features. It holds that, [14]:
effort = −2.13 f1 + 2.3 f2 − 1.34 f3 + 0.93 f4 + 0.65 f5
where:
f1 = starting minimum pitch on a relative logarithmic scale (with respect to the tonic),
f2 = maximum pitch on an absolute logarithmic scale (pitch height),
f3 = mean value of velocity calculated on the mean position of the two hands,
f4 = standard deviation of velocity calculated on the mean position of the two hands,
f5 = mean hand distance according to handedness, only used for one of the singers.
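For reference, the reported model transcribes directly into a small function; the feature values in the example call are placeholders for illustration, not values taken from the study's data.

```python
def effort(f1, f2, f3, f4, f5=0.0):
    """Effort estimate from the reported general linear model; f5 (mean hand
    distance) applies to only one of the singers, hence it is optional."""
    return -2.13 * f1 + 2.3 * f2 - 1.34 * f3 + 0.93 * f4 + 0.65 * f5

# Placeholder feature values for illustration only (not from the study's data).
print(round(effort(f1=0.2, f2=1.5, f3=0.4, f4=0.1), 2))
```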
Figure 3 provides a schematic representation of
parameter mapping, highlighting the potential
integration of the linear model into evolving
Electronic Musical Instruments. It visually
represents the changes in acoustic and movement
features based on different levels of user effort.
These findings could inform the optimization of
Electronic Musical Instruments by incorporating
effort as an intermediary mapping layer. This
naturally prompts discussions about the possibility
of utilizing physiological data as a more intuitive
measure of effort in wearable technologies.
Fig. 3: Schematic overview of potential parameter
mapping in new Electronic Musical Instruments,
[14]
3.1.2 Networked Music Performance
MTA has contributed to the research domain of
NMP by presenting one of the early systems for
real-time music collaboration, called DIAMOUSES.
As realized through dedicated experiments,
DIAMOUSES can support a wide range of music
performance scenarios including music rehearsals,
music lessons, and jamming sessions, [23].
DIAMOUSES is presented as one of the first
systems investigating remote musical interactions in the ‘Networked Music Performance’ article on Wikipedia and has produced some of the first
graphical user interfaces for NMP, [23], such as the
one shown in Figure 4.
In the years that followed, MTA contributed to
the MusiNet project, which used the Opus low-
latency audio codec to ease communication
over commonly available network
infrastructures. Part of this research was devoted to
investigating the perspective of NMP collaborations
in traditional music, thus promoting NMP as an
enabling technology for remote ethnic groups to
widely disseminate their musical culture. An
attempt of a Cretan music performance conducted
through the network is depicted in Figure 5.
In the post-COVID-19 era, NMP research
endeavors in MTA were devoted to the development
of the MusiCoLab platform for online music
education. MusiCoLab provides a suite of
collaborative, web-based applications supporting
online music teaching and learning by facilitating
musical artifacts such as music scores (shown in
Figure 6) and backing tracks (shown in Figure 7) as
collaborative objects that may be manipulated and
transformed during synchronous and asynchronous
music learning. A distinct focus of MusiCoLab is to
make use of current achievements in artificial
intelligence for automatic music content analysis to
enhance engagement in music learning, [74].
Fig. 4: Graphical User Interface of the
DIAMOUSES project during a networked music
rehearsal
Fig. 5: Traditional music performance conducted
through the network using the MusiNet
infrastructure for NMP. Preparation of the musicians before the experiment (left). One of the musicians collaborating through the network (right)
Fig. 6: Collaborative transformations of music
notation artifacts in MusiCoLab
Fig. 7: Collaborative music learning using a backing
track in MusiCoLab
3.1.3 Artificial Intelligence in Music
The most interesting results regarding AI in music
have been produced by the development of the core
method that performs conceptual blending in
musical cadences, [75].
This method is the basis for building compound
harmonic spaces that allow melodic harmonization.
Building those compound spaces is closely related
to what human composers do: we get a new concept
(e.g. the tritone substitution) by recombining things
that are already known (perfect and Phrygian
cadence) and use it in a way that reveals something
new about our world (i.e. that both-direction
resolutions are possible). Figure 8 shows the voice
relations that are involved in generating this result,
with bold lines/connections indicating strong
relations and thin lines weak but considerable
relations. This new concept can be employed in
various contexts provided by the melody. With this
method, the generative and the creative processes
are distinct: the generative part is creating the
concrete chords for a given melody; the “creative”
part has initially figured out the new component that
provides alternative interpretations of what we
already know (about how chord transitions work
within the initial spaces).
Fig. 8: Conceptual Blending of the Perfect and the
Phrygian cadences creates the tritone substitution
and the backdoor progression chord sequences
Fig. 9: Melody with segments in the C and F#
tonalities harmonized with the blended C and F#
Bach chorale major spaces
This method has produced results in various
directions, [76], regarding the creative exploration
of harmonic spaces, including blending of harmonic
idioms that are “incompatible” (meaning that they
have few common or similar chords) and blending
of harmonic spaces that result from transposed
versions of the same harmonic space (e.g. the
harmonic space of the Bach Chorales in C and F#
major tonalities). Especially in the latter case, the
method proved useful for producing harmonic
solutions for harmonizing melodies with implied
harmonic characteristics that are inconsistent with a
given harmonic space. Figure 9 shows such an example, where segments of the
melody are composed in different tonalities.
Blending the harmonic space of the Bach Chorales
in C and F# major tonalities provides a solution to
this unconventional melody (regarding the style of
Bach Chorales) that recombines what is already
known (i.e. the space of Bach Chorales) and reveals
something new (i.e. that such remote tonalities can
be bridged by manipulating specific components of
the existing space).
Fig. 10: Two example cross-harmonizations with
“Solar” as Song A and (a) Giant Steps and (b) Time
Remembered as Song B. Melody mismatches are
indicated. In (a), melody mismatches occur very often, in contrast to (b)
The application of the method that was adjusted
for generating cross-harmonizations of jazz
standards (i.e. melody of one song reharmonized
based on the harmony of another), indicated that
there are still details in the generative part (i.e. the
part that applies chords to the given melody) that
need to be corrected. Figure 10 (taken from [34])
shows that in some cross-harmonizations, some
produced chords harmonize melodic segments that
include notes that are “incompatible” with the
chord; identifying those incompatibilities, however,
would require the formulation of extensive sets of
rules for each chord. It is a future challenge to
examine how deep learning models would help
toward implicit learning of such rules from data.
3.2 Acoustics
3.2.1 Physical Acoustics, Optoacoustics,
Ultrasonics
Representative results of pump-probe experimental
measurements along with FE results are shown in
Figure 11 from collaborative research work of the MTA researchers with IPPL that studies the
photoacoustic transduction on Ta and Ti thin films,
[57]. Figure 11a presents comparative transient
reflectivity signals from Ta and Ti thin films coated
on Si <100> substrates. The larger modulation depth of the Ta sample, compared to the Ti sample, is the result of enhanced photoacoustic transduction. Figure 11b shows the time evolution
of the strain penetrating the substrate at 4 nm depth
from the interface with the metal, as calculated by
the FE model for both Ta/Si and Ti/Si samples. Due
to the larger Young’s modulus of Ta higher strains
are developed. Both experimental and simulation
results demonstrate that Ta exhibits superior
photoacoustic properties.
Fig. 11: a) Measured transient reflectivity signals for
the 25 nm Ta and Ti films on Si substrates showing
the electron excitation and Brillouin oscillations
characteristic feature, b) Calculated strain
distribution in 25 nm metal/Si systems as a function
of time, for a depth of 4 nm in the Si substrate.
Regarding the research on simulated
optoacoustic interactions, Figure 12a shows
representative results of the vertical component of
displacement for three different distances from the
center of an Si target irradiated by a 6 ns laser pulse. Moreover, Figure 12b shows characteristic
contour plot results of vertical displacement for
three different temporal moments. The laser
intensity is in the thermoelastic regime, and it is
evident that an SAW is generated and propagates
through the sample. The speed of ultrasonic waves
can be calculated by dividing the distance between
two nodes by the time required for the wave to
propagate from one node to the other. Thus, the
propagation speed is calculated to be ~5000 m/s.
This value deviates only slightly from the value of 5200 m/s obtained using the analytical equation for SAWs found in [77]. A thermal-structural FEM simulation was performed to obtain this result, considering temperature-dependent thermal and mechanical properties, while the laser source of Gaussian distribution is considered as the heat source loading term.
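The node-based speed estimate described above amounts to a simple division; the sketch below uses placeholder node positions and arrival times chosen only to be consistent with the reported ~5000 m/s value.

```python
# Surface-acoustic-wave speed from two displacement nodes, v = dx / dt.
# Node positions and arrival times are illustrative placeholders.
x1, x2 = 20e-6, 45e-6        # node positions on the surface (m)
t1, t2 = 4e-9, 9e-9          # corresponding arrival times (s)
v_saw = (x2 - x1) / (t2 - t1)
print(f"estimated SAW speed ≈ {v_saw:.0f} m/s")
```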
Regarding research on LPSS, new results from
the study of the acoustic directivity of LPSSs
generated by strongly focused nanosecond pulses
are here presented. Figure 13a shows a schematic
diagram of the LPSS measurement process. The
LPSSs are formed by laser pulses with a 6 ns duration, 532 nm wavelength, and 120 mJ energy, focused by a 75 mm lens. For the acoustic measurements, a high-dynamic-range, broadband microphone (G.R.A.S. 46BE) is used, connected to a high-sampling-rate audio interface (RME Fireface 802). Recording is done with the Audacity software.
Figure 13b shows the acoustic directivity of the
generated laser-plasma sound sources measured at
12 cm from the source. The polar diagram of Figure
13b is calculated by taking the total acoustic energy
of the measured N-pulses at the azimuthal angles 0,
30, 60, and 90 degrees. The emission towards the
directions from 120 to 180 degrees is extrapolated
considering cylindrical symmetry. The source
exhibits a mild directionality as the acoustic
emission on the laser propagation axis is
approximately 5 dB lower than on the axis
perpendicular to the laser path. This is expected
since, as is well known, even with strong focusing
the plasma source exhibits a slightly elongated
geometry that mainly occurs due to plasma back-
propagation, [78].
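The energy-based directivity estimate described above can be outlined as follows; the N-pulse waveforms and their amplitudes are synthetic placeholders, and the 120 to 180 degree values are mirrored from the measured angles as in the extrapolation described in the text.

```python
import numpy as np

# Sketch of the energy-based directivity estimate. Synthetic N-pulses stand in
# for the measured recordings on the xy-plane grid (0, 30, 60, 90 degrees).
fs = 1_000_000
t = np.arange(0, 200e-6, 1 / fs)

def n_pulse(amplitude, width=20e-6, t0=50e-6):
    """Crude N-shaped pulse: linear ramp from +amplitude to -amplitude."""
    s = np.zeros_like(t)
    mask = (t >= t0) & (t < t0 + width)
    s[mask] = amplitude * (1 - 2 * (t[mask] - t0) / width)
    return s

angles = np.array([0, 30, 60, 90])                        # degrees, measured
recordings = [n_pulse(a) for a in (0.6, 0.8, 0.9, 1.0)]   # placeholder amplitudes

energy = np.array([np.sum(r**2) / fs for r in recordings])
directivity_db = 10 * np.log10(energy / energy.max())

# Mirror 120-180 degrees from the measured 0-60 degrees, as described above.
full_angles = np.concatenate([angles, 180 - angles[::-1][1:]])
full_db = np.concatenate([directivity_db, directivity_db[::-1][1:]])
for a, d in zip(full_angles, full_db):
    print(f"{a:3d} deg: {d:5.1f} dB")
```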
However, the directional characteristics of the
source become weaker with increasing observation
distance. Figure 14a shows the acoustic directivity
of the LPSS generated under the same conditions,
this time measured at 60 cm. Figure 14b shows the
respective acoustic frequency spectra where, for
reasons of completeness, three measurements on the
xz plane at elevation angles 30, 60, and 90 degrees
are also plotted. From the two diagrams, it becomes
evident that the plasma source exhibits an almost
perfect omnidirectional emission at 60 cm distance,
with a maximum deviation of less than 1 dB
between different angles. The measurements on the
xz plane validate the assumption of cylindrical
symmetry used to extrapolate the data.
Fig. 12: a) The evolution of vertical displacement
for three different distances from the center of an
irradiated Si target in the thermoelastic regime. b)
Contour plots of vertical displacement for three
temporal moments
Fig. 13: a) Schematic diagram of the process for
measuring the acoustic directivity of LPSSs, b)
acoustic directivity of LPSSs generated by strongly
focused nanosecond laser pulses with 6 ns duration,
532 nm wavelength and 120 mJ energy measured on
the xy plane at 12 cm distance
Fig. 14: a) Acoustic directivity of LPSSs generated
by strongly focused nanosecond laser pulses with 6
ns duration, 532 nm wavelength, and 120 mJ energy
measured on the xy plane at 60 cm distance and b)
respective acoustic frequency spectra with
additional xz plane measurements
3.2.2 Vibroacoustics, Musical Acoustics
Figure 15 shows the combination of impulse response measurements with ESPI results for a carbon fiber bouzouki. The red dot highlights the match between a resonance frequency in the FRF plot obtained from the impulse response measurements and the related vibration mode given by ESPI, [65].
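As an illustrative sketch, the frequency response function underlying such a plot can be estimated from an impact-hammer force signal and the measured vibration response with the standard H1 estimator; the signals, sample rate, and spectral settings below are placeholders, not the actual measurement parameters:

# Minimal sketch: H1 frequency response function estimate from an impact-hammer
# force signal and the measured vibration response (placeholder signals).
import numpy as np
from scipy.signal import csd, welch

fs = 48000                            # sample rate [Hz], assumed
t = np.arange(0, 1.0, 1 / fs)
force = np.exp(-2000.0 * t)           # placeholder hammer pulse; use measured data
resp = np.random.randn(t.size)        # placeholder response; use measured data

f, S_ff = welch(force, fs=fs, nperseg=4096)        # auto-spectrum of the force
_, S_fr = csd(force, resp, fs=fs, nperseg=4096)    # cross-spectrum force -> response
H1 = S_fr / S_ff                                   # H1 FRF estimator
frf_db = 20.0 * np.log10(np.abs(H1) + 1e-12)

# Resonances appear as peaks of |H1|; these are the frequencies matched to the
# ESPI mode shapes (red dot in Fig. 15).
print(f"strongest peak near {f[np.argmax(frf_db)]:.1f} Hz")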
Figure 16 shows representative numerical results of a drumstick-cymbal interaction obtained via time-domain FEM-BEM simulations, [17]. Figure 16a shows the pressure distribution of the cymbal 3 ms after being hit by the drumstick, while Figure 16b demonstrates the computed normalized pressure at a point located in the air. For this vibroacoustic simulation, the drumstick is considered a rigid body, while the cymbal is considered a deformable body. A marker-based motion capture (mocap) system is used to capture the real loading conditions of the drumstick-cymbal interaction, which are then used in the FEM-BEM simulations. The recorded velocity and the spatial coordinates of the drumstick and the cymbal are adopted in the FEM-BEM model. FE is used to model the 8-inch splash B20 bronze alloy cymbal, while BE is used to model the surrounding air. The point where the normalized sound pressure arising from the impact is computed corresponds to an assumed microphone position in the acoustic field.
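As a brief, hedged illustration of how motion-derived loading terms of this kind can be obtained, the sketch below differentiates marker positions of a stick tip by finite differences; the frame rate, marker layout, and numerical values are assumptions and do not correspond to the mocap format actually used:

# Minimal sketch: drumstick-tip velocity from marker positions by finite
# differences (frame rate, marker layout, and values are assumptions).
import numpy as np

frame_rate = 240.0                            # mocap frames per second, assumed
# positions[i] = (x, y, z) of the stick-tip marker in metres at frame i
positions = np.array([[0.00, 0.00, 0.120],
                      [0.00, 0.00, 0.080],
                      [0.00, 0.00, 0.035],
                      [0.00, 0.00, 0.002]])

velocity = np.gradient(positions, 1.0 / frame_rate, axis=0)   # m/s
speed = np.linalg.norm(velocity, axis=1)
impact_frame = int(np.argmin(positions[:, 2]))  # lowest tip height ~ impact instant
print(f"approach speed at impact ~ {speed[impact_frame]:.2f} m/s")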
Fig. 15: Combination of a frequency response
function and an ESPI image for the visualization of
the vibration mode, which is excited at the
resonance highlighted with the red dot
Representative results of ESPI experimental measurements along with FE results are shown in Figure 17, as part of a collaborative research effort by MTA researchers investigating the vibroacoustic characteristics of cymbals, [72]. Different CAD approaches are used to model the complex geometry of a curved splash cymbal. The CAD model indicates the critical points of curvature changes and allows for a good approximation by two three-point arcs that interpolate the cymbal’s curved geometry. A uniform thickness is assumed for the developed geometry. Figure 17 shows the
experimental and numerical modal results that present the closest agreement. The FEM cymbal model has a thickness of 1.3 mm.
The experimental techniques described above, in combination with numerical calculations, can fully describe the vibrational and acoustic behavior of musical instruments. Applying this combination of methods during the manufacturing process of musical instruments may lead to assemblies with optimized vibration characteristics.
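To illustrate how the agreement between such experimental and numerical modal results can be quantified, the short sketch below compares assumed ESPI and FEM frequencies per mode; the numerical values are illustrative and are not those reported in [72]:

# Minimal sketch: relative deviation between measured (ESPI) and computed (FEM)
# modal frequencies; the values are illustrative, not those reported in [72].
exp_hz = [310.0, 520.0, 840.0, 1210.0]   # assumed ESPI resonance frequencies
fem_hz = [302.0, 535.0, 860.0, 1185.0]   # assumed FEM eigenfrequencies

for mode, (fe, ff) in enumerate(zip(exp_hz, fem_hz), start=1):
    dev = 100.0 * (ff - fe) / fe
    print(f"mode {mode}: ESPI {fe:7.1f} Hz, FEM {ff:7.1f} Hz, deviation {dev:+.1f}%")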
Fig. 16: a) Pressure distribution of the splash
cymbal 3 ms after the initiation of the drumstick-cymbal interaction, b) Time-domain FEM-BEM
results of sound pressure for the splash cymbal
Fig. 17: Four vibration modes and the related
resonant frequencies using ESPI and FEM
Regarding local Cretan music, researchers of the Department conducted a vibrational analysis of the Cretan lyra top plate, using ESPI, impulse response analysis, and FE analysis. The vibration amplitude distributions obtained by time-average ESPI for each eigenfrequency were in good agreement with those predicted by the FE analysis. A vibrational study of the full Cretan lyra assembly was also performed, [63]. Time-average ESPI experimental vibration analysis of the main normal modes of two Cretan lyras of different periods was carried out. The first was a pear-shaped Cretan lyra of the 17th century and the second was a contemporary one. The results showed that the variations in the observed normal frequencies were primarily attributed to the distinct geometric characteristics and shapes. In general, the 17th-century pear-shaped lyra was smaller in all characteristic dimensions than its contemporary counterpart; consequently, its resonance frequencies were higher for the corresponding characteristic normal modes.
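This trend is consistent with thin-plate scaling; as an illustrative approximation only (a uniform flat plate of thickness h and characteristic length L, which simplifies the actual lyra geometry), the bending eigenfrequencies scale as

f_{mn} \propto \frac{h}{L^{2}} \sqrt{\frac{E}{\rho\,(1-\nu^{2})}},

so, for comparable thickness and material, the smaller instrument exhibits higher frequencies for the corresponding modes.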
4 Conclusion, Discussion, and
Research Prospects
Research in MTA maintains a highly multidisciplinary character that produces results in a wide range of fields, spanning from optoacoustics to human embodiment and artificial intelligence. The unique and broad specializations of the research personnel, in combination with the high-end equipment available for studying the physical properties of sound and music, have opened new perspectives towards novel research in
music technology and acoustics. The scientific
novelty and cutting-edge know-how accumulated by
the members and groups provide strong potential for
the transition towards impactful scientific
applications. Such a transition is expected to be
further amplified by broader collaborations and the
incorporation of mature application-level scientific
components, such as artificial intelligence, in a
wider range of research activities within MTA.
In particular, the research in MTA on percussion sound classification using artificial intelligence is quickly evolving towards product-level maturity. The very limited relevant literature, [79], [80], shows that the field is largely unexplored and highly promising in this direction, with future work focusing on the formation of a large sound database via a programmable motorized drummer machine. Frequency response function modal analysis of percussion instruments, such as cymbals and other idiophones, together with corresponding measurements of the instruments' sound spectra, will be performed. The large sound database will be used to train machine learning models to identify different geometrical
characteristics and material properties of the instruments from their acoustic response. The drummer machine will enable the fast formation of large databases of objective measurements, effectively eliminating the subjective human factor. This sound database will also constitute a reference for the validation of vibroacoustic numerical models for the sound synthesis of percussion instruments.
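As a hedged illustration of the envisaged pipeline, the sketch below trains a classifier that maps spectral features of recorded strikes to geometry/material classes; the feature set, labels, and library choices are assumptions, and the placeholder data stand in for the future database:

# Minimal sketch (assumed features, labels, and libraries): a classifier mapping
# spectral features of recorded percussion strikes to geometry/material classes.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Placeholder data: rows = strikes, columns = spectral features (e.g. band
# energies or MFCCs); to be replaced by the database produced with the
# motorized drummer machine.
X = rng.normal(size=(200, 20))
y = rng.integers(0, 3, size=200)          # e.g. three geometry/alloy classes

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")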
Furthermore, the MTA team working on LPSS is currently developing a novel method for the precise characterization of phononic crystals and acoustic metamaterials, [81], which is anticipated to significantly boost the design and adoption of acoustic meta-structures in real-world applications. Exploitation of LPSS in acoustic perception measurements, in biomedical engineering (e.g., ultrasound focusing for noninvasive treatment), and in the study of sound-matter interaction by laser-driven excitation of acoustic meta-structures is planned for the near future. This acoustic excitation tool will be used complementarily to the motorized excitation system for the evaluation of idiophones, e.g., percussion instruments, with unprecedented advantages, especially for real-time acoustic monitoring of the manufacturing process.
Future work also includes fusing pre-trained models of images of vibrational modal patterns of given 3D models and materials, [82], with musical instrument simulation tools, [83], into a complete tool for estimating the impact of different materials and structures on the sound of musical instruments. Accurate implementation of such models would allow instrument manufacturers to make well-founded decisions during the instrument design phase, but also to explore new materials, shapes, and structures. Such a computational tool, together with the ESPI method developed in MTA for the experimental determination of the vibrational modes of acoustic musical instruments, will constitute a complete solution for optimization and innovation in musical instrument design.
Moreover, many possibilities emerge from the
idea of leveraging pre-trained machine learning
models in specific modalities and fusing them
through fine-tuning processes that require few
examples for training. One such example is the
development of a model that relates upper-body
gestures of Hindustani vocalists collected from
video and motion capture systems with raga-specific
melodic information of the singing voice. Such
models could prove useful for the accurate large-
scale automatic annotation of special gestures in
audio-visual datasets with Hindustani music,
allowing, for instance, users to search for
audiovisual segments within large databases
incorporating specific gestures (possibly performed
by the user as a system-input query). These research approaches can be implemented both in situ and remotely, taking advantage of networked music performance, as presented above. Additionally, room and soundscape acoustics may also be considered to enable studies of human-environment interactions with music and sound.
The unique multidisciplinary approach to the
physics of sound, audio technology, and artistic
aspects of music exercised in MTA, together with
the growing collaboration between the various
groups of the Department, is expected not only to
broaden scientific interest in music technology and
acoustics but also to lead to unique product-level results.
Acknowledgement:
This work was supported by computational time
granted by the Greek Research & Technology
Network (GRNET) in the National HPC facility
ARIS under project ID pr013024-LaMPIOS II. S.
Brezas acknowledges the Hellenic Mediterranean
University for the funding within the project
«Recording and metrological analysis of the
vibroacoustic characteristics of musical instruments
for the investigation of alternative and low-cost
materials and geometries with relevant sound
characteristics». The publication fees are financed
by the Project “Strengthening and optimizing the
operation of MODY services and academic and
research units of the Hellenic Mediterranean
University”, funded by the Public Investment
Program of the Greek Ministry of Education and
Religious Affairs.
References:
[1] Hellenic Mediterranean University, [Online].
https://mta.hmu.gr/en/home/ (Accessed Date:
November 1, 2023).
[2] Acoustics and Music Technology - BSc
(Hons), [Online].
https://www.eca.ed.ac.uk/study/undergraduat
e/acoustics-and-music-technology-bsc-hons
(Accessed Date: November 1, 2023).
[3] Acoustics with Music, [Online].
https://www.southampton.ac.uk/courses/acou
stics-with-music-degree-bsc (Accessed Date:
November 1, 2023).
[4] Hosken D., An Introduction to Music
Technology, Routledge, 2014, DOI:
10.4324/9780203539149.
[5] Mazzola G., Pang Y., Heinze W., Gkoudina
K., Pujakusuma G.A., Grunklee J., Chen Z.,
Hu T., Ma Y., Basic Music Technology: An
Introduction, Springer Cham, 2018,
DOI:10.1007/978-3-030-00982-3.
[6] Beranek L.L., Mellow T., Acoustics: Sound
Fields, Transducers and Vibration,
Academic Press, 2019.
[7] Alton Everest F., Pohlmann K.C., Master
Handbook of Acoustics, McGraw-Hill Education TAB, 2021.
[8] What is Acoustics?, [Online].
https://acoustics.byu.edu/what-is (Accessed
Date: November 1, 2023).
[9] Leman M., Embodied music cognition and
mediation technology, MIT Press:
Cambridge, 2008, DOI:
10.7551/mitpress/7476.001.0001.
[10] Noë A., Action in perception (Representation
and mind), MIT Press: Cambridge, 2004.
[11] Varela F., Thompson E., Rosch E., The
embodied mind: Cognitive science and
human experience, MIT Press: Cambridge,
1991.
[12] Goldstein E., Sensation and Perception,
Wadsworth Publishing, 2016.
[13] Paschalidou S., Miliaresi I., Multimodal
Deep Learning Architecture for Hindustani
Raga Classification, Sensors and
Transducers, Vol. 261(2), 2023, pp. 77-86.
[14] Paschalidou S., Effort inference and
prediction by acoustic and movement
descriptors in interactions with imaginary
objects during Dhrupad vocal improvisation,
Wearable Technologies, Vol. 3, 2022, e14,
DOI:10.1017/wtc.2022.8.
[15] Antoniadis P., Paschalidou S., Duval A., Jégo J.F., Bevilacqua F., Rendering embodied experience into multimodal data: concepts, tools and applications for Xenakis’ piano performance, Proceedings of the Centenary International Symposium XENAKIS 22, Athens & Nafplio,
2022.
[16] Holzapfel A., Hagleitner M., Paschalidou S.,
Diversity of traditional dance expression in
Crete: Data collection, research questions,
and method development, Proceedings of the
First Symposium of the ICTM Study Group
on Sound, Movement, and the Sciences
(SoMoS), Stockholm, 2020.
[17] Kaselouris E., Paschalidou S., Alexandraki
C., Dimitriou V., FEM-BEM Vibroacoustic
Simulations of Motion Driven Cymbal-
Drumstick Interactions, Acoustics, Vol. 5(1),
2023, pp. 165–176, DOI:
10.3390/acoustics5010010.
[18] Leman M., Music, Gesture, and the
Formation of Embodied Meaning. In Musical
gestures: Sound, movement, and meaning,
Routledge, 2010.
[19] Mulder A., Design of Virtual Three-
dimensional Instruments for Sound Control,
PhD thesis, Rijks Universiteit Groningen,
1998.
[20] Castagne N., Cadoz C., A goals-based
review of Physical modelling, Proceedings
of the 2005 International Computer Music
Conference (ICMC 2005), Barcelona, 2005,
pp. 343-346.
[21] A NIME Reader, Fifteen years of new
interfaces for musical expression, Springer
Cham, 2017, DOI: 10.1007/978-3-319-
47214-0.
[22] Habiter (avec) Xenakis, [Online].
https://www.cda95.fr/fr/agenda/habiter-avec-
xenakis (Accessed Date: November 1, 2023).
[23] Alexandraki C., Akoumianakis D, Exploring
New Perspectives in Network Music
Performance: The DIAMOUSES
Framework, Computer Music Journal, Vol.
34(2), 2010, pp. 66-83, DOI:
10.1162/comj.2010.34.2.66.
[24] Wu X., Dhara K.K., Krishnaswamy V.,
Enhancing Application-Layer Multicast for
P2P Conferencing, Proceedings of the 4th
IEEE Consumer Communications and
Networking Conference, 2007, Las Vegas,
pp. 986-990.
[25] Schuett N., The Effects of Latency on
Ensemble Performance, Honors Thesis,
Stanford University, 2002.
[26] Akoumianakis D., Alexandraki C., Milios D.,
Nousias A., Synchronous Collaborative
Music Lessons and their digital materiality,
Proceedings of Web Audio Conference 2022
(WAC 2022), Cannes, 2022, DOI:
10.5281/zenodo.6768537.
[27] Makris D., Kaliakatsos-Papakostas M.,
Karydis I., Kermanidis K.L., Conditional
neural sequence learners for generating
drums’ rhythms, Neural Computing and
Applications, Vol. 31(6), 2019, pp. 1793-
1804, DOI: 10.1007/s00521-018-3708-6.
[28] Kritsis K., Kylafi T., Kaliakatsos-Papakostas
M., Pikrakis A., Katsouros V., On the
adaptability of recurrent neural networks for
real-time jazz improvisation accompaniment,
Frontiers in Artificial Intelligence, Vol. 3,
2021, 508727, DOI:
10.3389/frai.2020.508727.
[29] Kirkpatrick K., Can AI Demonstrate
Creativity?, Communications of the ACM,
Vol. 66(2), 2023, pp. 21-23, DOI:
10.1145/3575665.
[30] Fauconnier G., Turner M., The way we think:
Conceptual blending and the mind's hidden
complexities, Basic books, 2002.
[31] Zbikowski L.M., Conceptual blending,
creativity, and music, Music & Science, Vol.
22(1), 2018, pp. 6-23, DOI:
10.1177/1029864917712783.
[32] Goguen J.A., Harrell D.F., Style: A
Computational and Conceptual Blending-
Based Approach, The structure of style:
Algorithmic approaches to understanding
manner and meaning, Springer: Berlin, 2010,
DOI: 10.1007/978-3-642-12337-5_12.
[33] Zacharakis A., Kaliakatsos-Papakostas M.,
Kalaitzidou S., Cambouropoulos E.,
Evaluating human-computer co-creative
processes in music: a case study on the
CHAMELEON melodic harmonizer,
Frontiers in Psychology, Vol. 12, 2021,
603752, DOI: 10.3389/fpsyg.2021.603752.
[34] Kaliakatsos-Papakostas M., Velenis K.,
Pasias L., Alexandraki C., Cambouropoulos
E., An HMM-Based Approach for Cross-
Harmonization of Jazz Standards, Applied
Sciences, Vol. 13(3), 2023, 1338, DOI:
10.3390/app13031338.
[35] Bengio, Y., The consciousness prior. arXiv,
1709.08568, 2017, DOI:
10.48550/arXiv.1709.08568.
[36] HELMCA | Hellenic Electroacoustic Music
Composers Association, [Online].
https://www.essim.gr/index_en.html
(Accessed Date: November 1, 2023).
[37] Tamás Ungváry (1936–2024), [Online].
https://www.cime-icem.net/ (Accessed Date:
November 1, 2023).
[38] Emmerson S., Landy L., Expanding the
Horizon of Electroacoustic Music Analysis,
Cambridge University Press, 2016, DOI:
10.1017/CBO9781316339633.
[39] Roads C., Composing Electronic Music: A New Aesthetic, Oxford University Press, 2015.
[40] Valsamakis N., Aesthetics and Techniques in
the Electroacoustic Music of Iannis Xenakis,
Pella Publishing Company, 2009.
[41] Valsamakis N., Non-Standard Sound
Synthesis with Dynamic Models, PhD thesis,
University of Plymouth, 2013.
[42] Moore A., Sonic Art: An Introduction to
Electroacoustic Music Composition,
Routledge, 2016.
[43] Clarke M., Dufeu F., Manning P., Inside
Computer Music, Oxford University Press,
2020, DOI: 10.1017/S1355771823000043.
[44] Bovermann T., Campo A., Egermann H.,
Hardjowirogo S.-I., Weinzierl S., Eds.,
Musical instruments in the 21st century:
Identities, configurations, practices, Springer
Singapore, 2016, DOI: 10.1007/978-981-10-
2951-6.
[45] Patterson T., Instruments for New Music:
Sound, Technology and Modernism,
University of California Press, 2016.
[46] Wanderley M.M., Battier M., Eds., Trends in
Gestural Control of Music, IRCAM - Centre
Pompidou, 2000.
[47] Etmektsoglou I., Tzedaki K., Re-sounding
the message: Media aesthetic education
through the prisms of acoustic ecology and
psychology, Proceedings of the Global
Composition Conference on Sound, Media,
and the Environment, Darmstadt, 2012, pp.
19-23.
[48] Tzedaki K., Soundscape i Kunsten/Soundscape Arts, Oslo: NOTAM, Organised Sound, Vol. 16, 2011, pp. 282-283, DOI: 10.1017/S135577181100032X.
[49] Wrightson K., An Introduction to Acoustic
Ecology, Soundscape: Journal of Acoustic
Ecology, Vol. 1, 2000, pp. 10-13.
[50] Murray Schafer R., The Soundscape: Our
Sonic Environment and the Tuning of the
World, Destiny Books, 1993.
[51] Krause B., Farina A., Using ecoacoustic
methods to survey the impacts of climate
change on biodiversity, Biological
Conservation, Vol. 195, 2016, pp. 245–254,
DOI: 10.1016/j.biocon.2016.01.013.
[52] Gage S.H., Napoletano B.M., Cooper M.C.,
Assessment of ecosystem biodiversity by
acoustic diversity indices, Journal of the
Acoustical Society of America, Vol. 109(5
Suppl.), 2001, p. 2430, DOI:
10.1121/1.4744597.
[53] Pieretti N., Farina A., Morri D., A new
methodology to infer the singing activity of
an avian community: The Acoustic
Complexity Index (ACI), Ecological
Indicators, Vol. 11(3), 2011, pp.868–873,
DOI: 10.1016/j.ecolind.2010.11.005.
[54] Sueur J., Farina A., Gasc A., Pieretti N.,
Pavoine S., Acoustic Indices for Biodiversity
Assessment and Landscape Investigation,
Acta Acustica United With Acustica, Vol.
100(4), 2014, pp. 772-781, DOI:
10.3813/AAA.918757.
[55] Tzedaki K., Into the sounding environment: A compositional approach, De Montfort University, 2011.
[56] Orphanos Y., Dimitriou V., Kaselouris E.,
Bakarezos E., Vainos N., Tatarakis M.,
Papadogiannis N.A., An integrated method
for material properties characterization based
on pulsed laser generated surface acoustic
waves. Microelectronic Engineering, Vol.
112, 2013, pp. 249-254, DOI:
10.1016/j.mee.2013.03.146.
[57] Kaleris K., Kaniolakis-Kaloudis E.,
Kaselouris E., Kosma K., Gagaoudakis E.,
Binas V., Petrakis S., Dimitriou V.,
Bakarezos M., Tatarakis M., Papadogiannis
N.A., Efficient ultrafast photoacoustic
transduction on Tantalum thin films, Applied
Physics A, Vol. 129, 2023, 527.
[58] Kifayat S., Shah H., Iqbal J., Ahmad P.,
Khandaker M.U., Haq S., Naeem M., Laser
induced breakdown spectroscopy methods
and applications: A comprehensive review,
Radiation Physics and Chemistry, Vol. 170,
2020, 108666, DOI:
10.1016/j.radphyschem.2019.108666.
[59] Jiang H., Qiu H., He N., Liao X., Research
on the optoacoustic communication system
for speech transmission by variable laser-
pulse repetition rates, Results in Physics,
Vol. 9, 2018, pp. 1291-1296, DOI:
10.1016/j.rinp.2018.04.050.
[60] Jones T.G., Hornstein M.K., Ting A.C.,
Wilkes Z.W., Intense underwater laser
acoustic source for Navy applications,
Journal of the Acoustical Society of America,
Vol. 125 (4 Suppl.), 2009, p. 2556, DOI:
10.1121/1.4783676.
[61] Kaleris K., Sound reproduction from laser-
driven pulsed acoustic sources
(Αναπαραγωγή ήχου μέσω παλμικών πηγών
οπτικής διέγερσης), PhD thesis, University
of Patras, 2021, DOI: 10.12681/eadd/50103.
[62] Kaleris K., Stelzner B., Hatziantoniou P.,
Trimis D., Mourjopoulos J., Laser-sound:
optoacoustic transduction from digital audio
streams, Scientific Reports, Vol. 11, 2021,
476, DOI: 10.1038/s41598-020-78990-z.
[63] Kaleris K., Stelzner B., Hatziantoniou P.,
Trimis D., Mourjopoulos J., Laser-Sound
Transduction from Digital ΣΔ Streams,
Journal of the Audio Engineering Society,
Vol. 70 (1/2), 2022, pp. 50-61, DOI:
10.17743/jaes.2021.0053.
[64] Fouilhe E., Goli G., Houssay A., Stoppani
G., Vibration modes of the cello tailpiece,
Archives of Acoustics, Vol. 36(4), 2011, pp.
713–726, DOI: 10.2478/v10168-011-0048-2.
[65] Duerinck T., Segers J., Skrodzka E.,
Verberkmoes G., Leman M., Van Paepegem
W., Kersemans M., Experimental
comparison of various excitation and
acquisition techniques for modal analysis of
violins, Applied Acoustics, Vol. 177, 2021,
107942, DOI:
10.1016/j.apacoust.2021.107942.
[66] Brezas S., Katsipis M., Orphanos Y.,
Kaselouris E., Kechrakos K., Kefaloyannis
N., Papadaki H., Sarantis-Karamesinis A.,
Petrakis S., Theodorakis I., Iliadis E.,
Karagkounidis T., Koumantos I., Tatarakis
M., Bakarezos M., Papadogiannis N.A.,
Dimitriou V., An Integrated Method for the
Vibroacoustic Evaluation of a Carbon Fiber
Bouzouki, Applied Sciences, Vol. 13(7),
2023, 4585, DOI: 10.3390/app13074585.
[67] Schedin S., Gren P.O., Rossing T.D.,
Transient wave response of a cymbal using
double-pulsed TV holography, Journal of the
Acoustical Society of America, Vol. 103(2),
1998, pp. 1217-1220, DOI:
10.1121/1.421254.
[68] Rossing T.D., Yoo J., Morrison A.,
Acoustics of percussion instruments: an
update, Acoustical Science and Technology,
Vol. 25(6), 2004, pp. 406-412, DOI:
10.1250/AST.25.406.
[69] Chatziioannou V., Reconstruction of an early
viola da gamba informed by physical
modeling, Journal of the Acoustical Society
of America, Vol. 145(6), 2019, pp. 3435–
3442, DOI: 10.1121/1.5111135.
[70] Gonzalez S., Salvi D., Baeza D., Antonacci
F., Sarti A., A data-driven approach to violin
making, Scientific Reports, Vol. 11, 2021,
9455, DOI: 10.1038/s41598-021-88931-z.
[71] Dalmont J.-P., Nederveen C.J., Joly N.,
Radiation impedance of tubes with different
flanges: Numerical and experimental
investigations, Journal of Sound and
Vibration, Vol. 244(3), 2001, pp. 505-534,
DOI: 10.1006/jsvi.2000.3487.
[72] Brezas S., Kaselouris E., Orphanos Y.,
Bakarezos M., Dimitriou V., Papadogiannis
N.A., Experimental and computational
vibroacoustic study of cymbals, Proceedings
of Forum Acusticum, Torino, 2023, pp. 1177-
1180.
[73] Torres J.A., Soto C.A., Torres-Torres D.,
Exploring design variations of the Titian
Stradivari violin using a finite element
model, Journal of the Acoustical Society of
America, Vol. 148(3), 2020, pp. 1496-1506,
DOI: 10.1121/10.0001952.
[74] Alexandraki C., Mimidis N., Viglis Y.,
Nousias A., Milios D., Tsioutas K.,
Collaborative playalong practices in online
music lessons: the MusiCoLab Toolset.
Proceedings of the 4th International
Symposium on the Internet of Sounds, Pisa,
2023, DOI:
10.1109/IEEECONF59510.2023.10335236.
[75] Eppe M., Confalonieri R., Maclean E.,
Kaliakatsos M., Cambouropoulos E.,
Schorlemmer M., Codescu M., Kühnberger
K.-U., Computational invention of cadences
and chord progressions by conceptual chord-
blending, Proceedings of the Twenty-Fourth
International Joint Conference on Artificial
Intelligence, Buenos Aires, 2015, pp. 2445-
2451, DOI: 10.5555/2832581.2832590.
[76] Creative Blending of Harmonic Spaces,
[Online].
https://ccm.web.auth.gr/blendedharmonisatio
ns.html (Accessed Date: November 1, 2023).
[77] Zhan Y., Liu C., Zhang F., Qiu Z.,
Experimental study and finite element
analysis based on equivalent load method for
laser ultrasonic measurement of elastic
constants, Ultrasonics, Vol. 69, 2016, pp.
243-247, DOI:10.1016/j.ultras.2016.03.014.
[78] Pandey P.K., Thareja R.K., Plume dynamics
of laser produced air plasma, Journal of
Physics: Conference Series, Vol. 208, 2010,
012091, DOI: 10.1088/1742-
6596/208/1/012091.
[79] Boratto T.H.A., Cury A.A., Goliatt L.,
Machine learning-based classification of
bronze alloy cymbals from microphone
captured data enhanced with feature selection
approaches, Expert Systems with
Applications, Vol. 215, 2023, 119378, DOI:
10.1016/j.eswa.2022.119378.
[80] Chhabra A., Singh A.V., Srivastava R.,
Mittal V., Drum Instrument Classification
Using Machine Learning, Proceedings of
2020 2nd International Conference on
Advances in Computing, Communication
Control and Networking (ICACCCN),
Greater Noida, 2020, DOI:
10.1109/ICACCCN51052.2020.9362963.
[81] Aravantinos-Zafiris N., Sigalas M.M.,
Katerelos D.T.G., Complete acoustic
bandgaps in a three-dimensional phononic
metamaterial with simple cubic arrangement,
Journal of Applied Physics, Vol. 133 (6),
2023, 065101, DOI: 10.1063/5.0127518.
[82] Gonzalez S., Salvi D., Baeza D., Antonacci
F., Sarti A., A data-driven approach to violin
making, Scientific Reports, Vol. 11, 2021,
9455, DOI: 10.1038/s41598-021-88931-z.
[83] Longo G., Gonzalez S., Antonacci F., Sarti
A., Predicting the acoustics of archtop
guitars using an AI-based algorithm trained
on FEM simulations, Proceedings of Forum
Acusticum, Torino, 2023, pp. 2965-2971.
Contribution of Individual Authors to the
Creation of a Scientific Article (Ghostwriting
Policy)
Vasilis Dimitriou, Maximos Kaliakatsos-
Papakostas, and Evaggelos Kaselouris carried out
the conceptualization.
Spyros Brezas, Stella Paschalidou, Chrisoula
Alexandraki, Christine Georgatou, Konstantinos
Kaleris, Maximos Kaliakatsos-Papakostas,
Emmanouil Kaniolakis-Kaloudis, Evaggelos
Kaselouris, Yannis Orphanos, Helen Papadaki,
Katerina Tzedaki, and Nikolas Valsamakis
implemented writing and original draft preparation.
Stella Paschalidou, Chrisoula Alexandraki, Makis
Bakarezos, Vasilis Dimitriou, Maximos
Kaliakatsos-Papakostas, Evaggelos Kaselouris,
Nektarios A Papadogiannis, Katerina Tzedaki, and
Nikolas Valsamakis were responsible for writing,
review and editing.
Spyros Brezas was responsible for the management and coordination of the research activity planning and execution, and is the corresponding author (denoted by *).
All authors have read and agreed to the published
version of the manuscript.
Sources of Funding for Research Presented in a
Scientific Article or Scientific Article Itself
No funding was received for conducting this study.
Conflict of Interest
The authors have no conflicts of interest to declare.
Creative Commons Attribution License 4.0
(Attribution 4.0 International, CC BY 4.0)
This article is published under the terms of the
Creative Commons Attribution License 4.0
https://creativecommons.org/licenses/by/4.0/deed.en
_US