other relevant service technologies, describe and
identify functions throughout the data service
process, and better analyze the relationships among
the data. In general, a data processing system is
organized into several layers. The infrastructure
layer provides data for each task, accesses physical
resources through terminal interfaces, and exposes
the critical interfaces required by the virtualization
process. The virtualization layer uses virtualization
tools to aggregate the different data resources in the
cloud environment, logically encapsulates them, and
hands them to the platform layer for allocation,
scheduling, and subsequent development. The
platform layer is the core of the whole modeling
system and also serves as the service layer for data
analysis. Finally, the application layer provides
services directly to users; a user who wants to
manage the service structure through processing
requests must first complete identity authentication
at the application layer.
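The four-layer structure described above can be sketched as follows. This is a minimal illustration only; all class and method names (`InfrastructureLayer`, `provision`, `handle_request`, and so on) are invented for the sketch and are not part of any actual cloud platform API.

```python
# Minimal sketch of the four-layer architecture: infrastructure ->
# virtualization -> platform -> application, with authentication at the top.
class InfrastructureLayer:
    """Accesses physical resources through a terminal-style interface."""
    def read_resource(self, name: str) -> str:
        return f"raw:{name}"  # stand-in for a physical read

class VirtualizationLayer:
    """Aggregates and logically encapsulates resources for the platform."""
    def __init__(self, infra: InfrastructureLayer):
        self.infra = infra
    def provision(self, names: list) -> dict:
        # Summarize different data resources behind one logical view.
        return {n: self.infra.read_resource(n) for n in names}

class PlatformLayer:
    """Core of the modeling system; the service layer for data analysis."""
    def __init__(self, virt: VirtualizationLayer):
        self.virt = virt
    def analyze(self, names: list) -> int:
        return len(self.virt.provision(names))  # placeholder "analysis"

class ApplicationLayer:
    """Serves users directly; requests must pass identity authentication."""
    def __init__(self, platform: PlatformLayer, users: set):
        self.platform, self.users = platform, users
    def handle_request(self, user: str, names: list) -> int:
        if user not in self.users:  # authentication gate
            raise PermissionError("authentication failed")
        return self.platform.analyze(names)

app = ApplicationLayer(
    PlatformLayer(VirtualizationLayer(InfrastructureLayer())),
    users={"alice"},
)
print(app.handle_request("alice", ["sales", "clicks"]))  # 2
```

The point of the sketch is the dependency direction: each layer only talks to the one directly beneath it, and authentication happens once, at the application boundary.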
2.2 Data Mining Services
Every step of the data mining process is closely
linked to the others. Target data are analyzed by
building a model; the model's contents are evaluated
and discussed against the initial resources, and its
practical application in subsequent work is
examined. From this perspective, content analysis
proceeds from the problems at hand, so as to
understand the objectives to be achieved and to
master the sales targets in the field. When evaluating
consumer behavior trends, data mining can show
whether the existing resources meet users' needs; if
they do, the behavioral process behind the target
data can be analyzed further.

The whole process divides into several stages. The
first is initial data preparation. Data processing must
handle not only the large volume of data on the
network but also the cleaning of many redundant
records, in order to determine how overlapping
resources are allocated. In addition, while
classifying and integrating the basic data, more
valuable indicators must be extracted from the
existing data so that the data can be thoroughly
cleaned and stripped of impurities before the final
loading step.

Next comes data collection. Data mining
presupposes that all the relevant data have been
gathered according to the collection plan drawn up
for the problem. For example, data held in multiple
files or systems will inevitably overlap, so the
different data must be deduplicated and placed under
unified storage management.

Faced with a large amount of data, selecting
valuable content according to consumers' actual
needs eliminates much invalid work and reduces the
scale of the computation; the appropriate tuples
should be selected as far as possible while leaving
the original data unchanged. In general, data
filtering requires unified management and control of
data of the same type; in particular, automating data
processing over massive data has become the key to
standardized process control.

This stage also involves handling erroneous data:
once the nature of the errors is clear, the defects are
corrected. A few errors in a large data set do not
materially degrade its overall quality. If the error
ratio is too high, however, simply deleting the
erroneous records will reduce the accuracy of the
entire data set and compromise subsequent
operations.
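The collection, deduplication, and tuple-selection steps described above can be sketched as follows. The record fields (`user`, `item`, `spend`) and the selection threshold are illustrative assumptions, not part of the original system.

```python
# Sketch of the cleaning steps: merge records collected from several
# files/systems, drop exact duplicates, then keep only the tuples that are
# valuable for the task, leaving the original records unchanged.
source_a = [{"user": "u1", "item": "phone", "spend": 120},
            {"user": "u2", "item": "tv", "spend": 300}]
source_b = [{"user": "u2", "item": "tv", "spend": 300},   # overlaps source_a
            {"user": "u3", "item": "phone", "spend": 15}]

def unify(*sources):
    """Unified storage: merge all sources and remove exact duplicates."""
    seen, merged = set(), []
    for src in sources:
        for rec in src:
            key = (rec["user"], rec["item"], rec["spend"])
            if key not in seen:
                seen.add(key)
                merged.append(rec)
    return merged

def select_tuples(records, min_spend=100):
    """Keep only 'valuable' tuples; the input list is not modified."""
    return [r for r in records if r["spend"] >= min_spend]

clean = unify(source_a, source_b)
valuable = select_tuples(clean)
print(len(clean), len(valuable))  # 3 2
```

The duplicate key here is the whole record; in practice the key would be chosen per data source, which is exactly the "unified management of the same type of data" the text calls for.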
It is therefore necessary to consider how to handle
the null values in a data set, for example by
imputing them through expert-experience analysis
or regression analysis. During data conversion, data
attributes are discretized into intervals of different
types, and the data in each interval are mapped to
the corresponding discrete value.

Throughout data processing, the analysis pipeline is
closely tied to the processing of the data sources:
the correctness and integrity of these data directly
determine the quality of the data mining results.
Current cloud computing architectures, however,
offer strong computing capacity and can analyze
large volumes of data on enterprises' daily behavior,
which facilitates the analysis of commodity
attributes and user behavior tendencies. The
modeling process of data mining is a key part of
data analysis [3].
3 Task Learning Methods for
Intelligent Marketing in High-Tech
Products
Product sales time is the active cycle of a product in
the market and a key indicator of the market
development of the related industry. At the micro
level, product sales time is not only of great concern
to the seller but also the main reference by which
potential consumers evaluate a product. Because, in
general, the shorter the product sales time is, the
WSEAS TRANSACTIONS on BUSINESS and ECONOMICS
DOI: 10.37394/23207.2022.19.50
Chung-Chih Lee, Hsing-Chau Tseng,
Chun-Chu Liu, Huei-Jeng Chou