Website Mining

Website mining is used to access large amounts of text information from completely different sources, and this task can become unmanageable because information retrieval is difficult and time-consuming. Website mining is therefore a vital task when working with web pages.

Most skilled students and researchers can point to some content on the web that they themselves have written, such as on their own or clients' websites, or on sites they maintain for professional, personal, or research interests. As a simple test, they might try to find that content using only a general web search engine.

If, as is likely, they can compose a carefully worded search query by recalling very specific features of the content, such as its title keywords, key phrases, the name of the website, and so forth, they stand a reasonable chance of success, with their content appearing on the first page of the search results. On the other hand, if they ask someone else to find the content, someone who does not know it very well (which, of course, is the usual condition under which we hunt for web content), the chances of success are much lower. The content may perhaps still be found, but probably only after trying numerous search queries and scrolling through many pages of search results. Website mining is a method often used to mine content, reduce search time and search space, and obtain a better position in a search engine results page (SERP).

If a query in a SERP is semantically similar to another query, then to determine the results of that semantically similar query within a website's results we use a set that contains a collection of similar words for a particular sense of a word. Note, however, that semantic similarity between entities changes over time and across domains. The page count [4] of a query is an estimate of the number of pages that contain the query words. In general, the page count may not necessarily equal the word frequency, because the queried word may appear many times on one page. Page counts and text snippets are two important methods for determining the efficiency and similarity of two words. As an example, take query (A) as Apple, the multinational company, and (B) as apple, the fruit of the apple tree; both queries are results of the proposed set technique with the experimental advantage of page counts. Apple is frequently associated with computers on the web. However, this sense of apple is not listed in most general-purpose thesauri or dictionaries. A user who searches for apple on the web may be interested in this sense of apple and not in apple as a fruit. New words are constantly being created, and new senses are assigned to existing words. Manually maintaining ontologies to capture these new words and senses is expensive, if not impossible.

We therefore propose an automatic technique to estimate the semantic similarity between words or content using web search engines. Because of the immense variety of documents and the high growth rate of the web, it is time-consuming to analyze each document individually. Web search engines provide an efficient interface to this vast body of data. Page counts and snippets are two useful information sources provided by most web search engines.
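The role of these two data sources can be sketched as follows. The `web_search` function below is a hypothetical stand-in for a real search-engine API (real engines expose different HTTP interfaces, quotas, and result formats); it returns canned, illustrative figures only.

```python
# Sketch of collecting the two data sources used throughout this work:
# page counts and text snippets for a query. `web_search` is a hypothetical
# stand-in for a real search-engine API and returns canned, illustrative data.

def web_search(query):
    canned = {
        "apple": {"page_count": 2_450_000_000,
                  "snippets": ["apple trees bear fruit in autumn ...",
                               "Apple unveils the new iPhone ..."]},
        "fruit": {"page_count": 980_000_000,
                  "snippets": ["fresh fruit delivered daily ..."]},
        "apple AND fruit": {"page_count": 120_000_000,
                            "snippets": ["the apple is the fruit of the apple tree"]},
    }
    return canned.get(query, {"page_count": 0, "snippets": []})

def page_count(query):
    # Estimate of the number of pages containing the query words; note this
    # is not the word frequency, since a word may occur many times per page.
    return web_search(query)["page_count"]

def snippets(query):
    # Short excerpts showing the local context in which query words co-occur.
    return web_search(query)["snippets"]

print(page_count("apple AND fruit"))  # 120000000
```

A real implementation would replace `web_search` with HTTP calls to a search API; everything downstream only needs these two accessor functions.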



The PageCount property returns a long value that indicates the number of pages of data in a Recordset object. Use the PageCount property to determine how many pages of data are present in the Recordset object. Even if the last page is incomplete because it holds fewer records than the PageSize value, it counts as an additional page in the PageCount result. If the Recordset object does not support this property, the value present here indicates that the page count is indeterminable. Some SEO tools can be used for page counting, for example website link count checkers, "count my page" tools, and web word counters.
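The arithmetic behind this property, including the rule that an incomplete last page still counts as a page, can be checked with a short sketch (the function name is ours, not part of any Recordset API):

```python
import math

def recordset_page_count(record_count, page_size):
    """Pages needed to hold record_count records at page_size records per
    page; a partial last page still counts as a full page."""
    if page_size <= 0:
        raise ValueError("page_size must be positive")
    return math.ceil(record_count / page_size)

# 95 records at 10 per page: 9 full pages plus an incomplete 10th page.
print(recordset_page_count(95, 10))  # 10
```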


Text snippets are typically used to clarify the meaning of an otherwise "cluttered" function, or to minimize the use of repeated code that is common to several functions. Snippet management is a feature of some text editors, source-code editors, IDEs, and related software.

Search optimization, also known as Knowledge Discovery in Databases (KDD) [9], is the process of automatically searching large volumes of data for patterns using tools such as ranking, association-mining algorithms, clustering, and so on. Search optimization is a complex topic with links to multiple core fields such as computer science, and it adds value by drawing on seminal processing techniques from statistics, information retrieval, machine learning, and pattern recognition.

Search optimization techniques are the result of a long process of research and product development. This evolution began when business data was first stored on computers, continued with improvements in data access, and more recently has produced technologies that let users navigate through their data in real time. Search optimization takes this evolutionary process beyond retrospective data access and navigation, toward prospective and proactive information delivery. Search optimization is ready for application in this profession because it is supported by three technologies that are now sufficiently mature:

- Massive data collection

- Powerful multiprocessor computers

- Search optimization algorithms.

With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to use automated tools to find the desired information resources, and to track and analyze their usage patterns. These factors create the need for server-side and client-side intelligent systems that can effectively mine for knowledge. Web mining [6] can be broadly defined as the discovery and analysis of useful information from the World Wide Web. This covers the automatic search of information resources available online, i.e., website mining, and the discovery of user access patterns from web servers, i.e., web usage mining.


The process of content mining, with the benefit of extracting semantically similar words and obtaining a top ranking within any webpage, is often complicated, slow, and manually intensive. It can be a waste of time if your pages are not structured in a search-engine-friendly manner. Hence, SEO can be thought of as the medium one uses to communicate with a search engine, so that it knows exactly what your website is about. It is also about achieving a high level of search engine visibility through a wide variety of well-optimized keyword phrases that are directly related to your business. Properly defined algorithms reduce the time complexity. SEO refers to a range of techniques used to improve a web page's search engine ranking for a keyword or keyword phrase. SEO is a crucial part of search marketing. It is a technical process of adjusting a website with the aim of optimizing or promoting keyword search phrases relevant to that website to the search engines, so that they in turn index the site as highly relevant to that keyword search phrase [2].


Mining content online and finding the semantic similarity between words is an important task. The main problems with this procedure are that it cannot transfer the latest updates from the mined data, and that there is no proper coordination between different applications and users.


John B. Killoran answers two general questions: (1) what contributes to search engine rankings? and (2) what can website creators and webmasters do to make their content and sites easier to find by audiences using a search engine? Professional communicators can make it easier for audiences to find their website through search engines if they: (1) consider their web content's audiences and website's competitors when analyzing keywords; (2) insert keywords into web text that will appear on search engine results pages; and (3) link their website and pages with those of other website creators. Because successful search engine optimization requires considerable time, professional communicators should progressively apply these lessons in the sequence presented in this tutorial, and should keep up to date with frequently changing ranking algorithms and with the associated changing practices of search optimization professionals.

John B. Killoran notes that for a website's organic results, SERPs (Search Engine Results Pages) usually default to listing 10 web pages, featuring for each its title hyperlinked to the webpage, a "snippet" of text typically excerpted from the page, and the page's URL (web address). Search engines build their search index (corpus of web content) in the first place mostly by employing a spider to repeatedly crawl the web link by link and record new and updated pages, defunct links, and so forth. The index includes the words on the crawled web pages together with their location and the surrounding web markup. While this research report focuses mostly on nontechnical means of SEO, those responsible for websites need to know some HTML (Hypertext Markup Language), the most elementary form of web markup, in which various tags and their attributes are used to encode the structure, design, and functionality of a webpage.

Bhupesh, Sandip, and Ashish describe clustering as the unsupervised classification of patterns into groups. Clustering can be performed in several ways and by researchers in several disciplines; for example, clusters can be formed on the basis of queries submitted to the search engine. Their paper provides a summary of algorithms that are useful in search engine optimization. The algorithms discussed include BB's graph-based clustering algorithm and a personalized concept-based clustering algorithm. All the algorithms work on the basis of precision and recall values, which are helpful in determining the efficiency of search engine queries.

Using the query clustering algorithm, queries appear unrelated before document clustering; after document clustering, the queries are related to one another because they are both linked to the document cluster.

The following figure represents the query clustering algorithm.

Fig. 1: Clustering algorithm implementation [2]

The algorithms described in this paper are fully capable of clustering search engine queries. If one experiments with these algorithms, one finds better precision and recall than with existing query clustering methods. Higher precision and recall values increase the effectiveness of search engine queries without adding extra burden on the users. Additionally, with the help of these algorithms one can improve prediction accuracy and processing cost. Future work can extend this by further improving precision and recall values for search engine queries.

Resnik proposed a similarity measure using information content. He defined the similarity between two concepts C1 and C2 in a taxonomy as the maximum of the information content of all concepts C that subsume both C1 and C2. The similarity between two words is then defined as the maximum of the similarity between any concepts the words belong to. Later work proposed a similarity measure that uses shortest path length, depth, and local density in a taxonomy; those experiments reported a high Pearson correlation coefficient on the Miller and Charles benchmark data set. However, these methods were not evaluated in terms of similarities among named entities.

Lin defined the similarity between two concepts in terms of the information that is common to both concepts and the information contained in each individual concept.

Wen-Xue Tao and Wan-Li Zuo observe that besides global importance, local importance (i.e., query sensitivity) is quite necessary for a website ranking algorithm to order query results accurately, and that these two scopes cannot substitute for one another. Based on this observation, a new page-ranking algorithm is proposed, and some combined approaches are investigated. Preliminary experimental results demonstrate the effectiveness of the improved algorithm.

Harmunish Taneja and Richa Gupta proposed an algorithm that assigns a rank to every document on the web, specifying the quality of that document, or the relative level of trust one can place in it. The PageRank algorithm is employed in the Google search engine for ranking search results.

This algorithm is essentially a query-independent algorithm that takes a web graph as input and assigns a rank to every document, specifying the relative authority of that document on the web.

Taneja and Gupta demonstrate how the web's link structure can be used to rank documents, and this ranking can be computed offline. With this approach one can rate the various documents on the web independently of the query, although the complete score computation depends on various other factors. In the proposed algorithm a damping factor is used, which plays a very important role in the analysis of the algorithm. The analysis concludes that the damping factor should not be selected close to zero; at a damping factor of one, the system enters an ideal state and the ranking provided is insignificant. The proposed algorithm is query independent and does not take the query into account during ranking.
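The query-independent ranking described above can be sketched as power iteration over a link graph. This is a minimal sketch of the classic PageRank recurrence, not Google's production system; the three-page web graph is invented for illustration, and `d` is the damping factor discussed above.

```python
def pagerank(links, d=0.85, iters=50):
    """Query-independent PageRank by power iteration.
    links: dict mapping each node to the list of nodes it links to."""
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}          # start with a uniform rank
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}   # random-jump contribution
        for u in nodes:
            out = links[u]
            if not out:                          # dangling node: spread rank uniformly
                for v in nodes:
                    new[v] += d * rank[u] / n
            else:                                # share rank across outgoing links
                for v in out:
                    new[v] += d * rank[u] / len(out)
        rank = new
    return rank

# Tiny invented web graph: c is linked to by both a and b, so it ranks highest.
web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # c
```

Setting `d` close to 0 makes all ranks collapse toward the uniform value, while `d = 1` removes the random-jump term entirely, matching the observation above that extreme damping values give insignificant rankings.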

The results produced by these techniques are largely complementary to ours. However, our techniques only need a local client and server application in the initial phase; no syntax analysis or additional information about the search results is necessary.


Given two words P and Q, we model the problem of measuring the semantic similarity between P and Q as one of constructing a function sim(P, Q), applied within the clustering algorithm, that returns a value in the range 0 to 1. If P and Q are highly similar (e.g., synonyms), the clustering algorithm merges these nodes into a single node; that is, if P and Q are semantically similar, we expect sim(P, Q) to be close to 1.

If P and Q are not similar (e.g., not synonyms), the clustering algorithm leaves these nodes separate; that is, if P and Q are not semantically similar, we expect sim(P, Q) to be close to 0.

Sim(x, y) = |N(x) ∩ N(y)| / |N(x) ∪ N(y)|,  if |N(x) ∩ N(y)| > 1;
Sim(x, y) = 0, otherwise.
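The measure above is a thresholded Jaccard coefficient over the neighbour sets N(x) and N(y). A minimal sketch, with invented neighbour sets for illustration:

```python
def sim(x_neighbours, y_neighbours):
    """Jaccard similarity over the neighbour sets N(x) and N(y);
    returns 0 when the overlap is too small to be meaningful (<= 1)."""
    inter = x_neighbours & y_neighbours
    if len(inter) <= 1:
        return 0.0
    return len(inter) / len(x_neighbours | y_neighbours)

# Invented example: 2 shared neighbours out of 5 distinct neighbours overall.
nx = {"d1", "d2", "d3", "d4"}
ny = {"d2", "d3", "d5"}
print(sim(nx, ny))  # 0.4
```

The threshold means a single accidental co-occurrence never triggers a merge in the clustering step.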

We define various features that express the similarity between P and Q using snippets retrieved from a web search engine for the two words. Using this feature representation of words, we propose a pattern extraction algorithm that extracts semantically similar words on the server side and merges these similar words into one single node; the classification of similar and non-similar word pairs is done by the clustering algorithm.

Fig. 1 illustrates an example of using the proposed technique to compute the semantic similarity between two words P and Q. First, we query a web search engine and retrieve page counts for the two words (i.e., "P", "Q", and "P AND Q"). We propose four similarity scores using page counts. Page-count-based similarity scores consider the total co-occurrences of two words on the web or in any local setting. On the other hand, text snippets returned by a search engine represent the local context in which the two words co-occur on the web. Consequently, we find the frequency of numerous lexical patterns in the snippets returned for the combined query of the two words. The lexical patterns we use are extracted automatically using the method described in the initial page-count-based approach for the query "P AND Q". Both page-count-based similarity content and lexical pattern clusters are used to define the various features that represent the relation between two words. Using this pattern representation of word pairs, we propose a pattern extraction algorithm (see the model figure) to show how a pattern of n-word data is added into the database table [19].

Page-Count-Based Co-Occurrence Measures. Page counts for the query "P AND Q" can be considered an approximation of the co-occurrence of two words (or multiword phrases) P and Q on the web. However, page counts for the query "P AND Q" alone do not accurately express semantic similarity. For instance, Google returns 10,41,00,00,000 as the page count for "apple fruit" AND "apple i-phone", whereas it returns 13,60,00,000 for "banana" AND "apple i-phone". Although banana is more semantically similar to apple fruit than to apple i-phone, the page counts for the query "apple fruit" AND "apple i-phone" are more than four times greater than those for the query "banana" AND "apple i-phone". One must consider the page counts not only for the query "P AND Q" but also for the individual words P and Q to assess the semantic similarity between P and Q.
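The normalization argument above is exactly what page-count-based co-occurrence scores provide. The sketch below shows four standard web-adapted measures (modified Jaccard, Overlap, Dice, and pointwise mutual information), commonly used in this line of work; the assumed index size `N`, the noise threshold `c`, and the example page counts are all illustrative assumptions, not measured values.

```python
import math

N = 10**10   # assumed number of documents indexed by the engine (illustrative)
c = 5        # noise threshold: tiny joint page counts are treated as zero

def web_jaccard(p, q, pq):
    return 0.0 if pq <= c else pq / (p + q - pq)

def web_overlap(p, q, pq):
    return 0.0 if pq <= c else pq / min(p, q)

def web_dice(p, q, pq):
    return 0.0 if pq <= c else 2 * pq / (p + q)

def web_pmi(p, q, pq):
    # Pointwise mutual information of the two words' occurrence probabilities.
    if pq <= c:
        return 0.0
    return math.log2((pq / N) / ((p / N) * (q / N)))

# Illustrative page counts for P, Q, and "P AND Q".
p, q, pq = 2_450_000_000, 980_000_000, 120_000_000
print(round(web_dice(p, q, pq), 4))  # 0.07
```

All four scores rise with the joint count `pq` but are normalized by the individual counts `p` and `q`, so a globally popular word cannot dominate the similarity the way raw joint page counts do.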

To merge similar words, we use a combination of both partitioning clustering and grid clustering techniques. The combination of these clustering methods is proposed as a graph segregation technique.

Fig. 2: Pattern extraction model for an n-word object

In the graph segregation technique, the data is represented in the form of a graph G = (V, E), with V vertices and E edges, such that it is possible to partition G into smaller components with specific properties. For example, a k-way partition divides the vertex set into k smaller components. A good partition is defined as one in which the number of edges running between separated components is small. Uniform graph partitioning is a type of graph partitioning problem that consists of dividing a graph into components of about the same size, such that there are few connections between the components. Important applications of graph partitioning include scientific computing, partitioning the various stages of a VLSI design circuit, and task scheduling in multiprocessor systems. Recently, the uniform graph partitioning problem has gained importance due to its application in clustering and the detection of cliques in social, pathological, and biological networks. Other formulations, as well as a survey of recent trends in computational methods and applications, can be found in the literature.

Typically, graph partitioning problems fall into the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms. However, uniform graph partitioning, or the balanced graph partitioning problem, can be shown to be NP-complete to approximate within any finite factor. Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist unless P = NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components but also the sizes of the components are constrained, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.

A. Problem

Consider a graph G = (V, E), where V denotes the set of n vertices and E the set of edges. For a (k, v) balanced partition problem, the objective is to partition G into k components of at most size v·(n/k), while minimizing the capacity of the edges between separate components [1]. Also, given G and an integer k > 1, partition V into k parts (subsets) V1, V2, ..., Vk such that the parts are disjoint and have equal size, and the number of edges with endpoints in different parts is minimized. Such partition problems have been discussed in the literature as approximation or resource augmentation approaches. A common extension is to hypergraphs, where an edge can connect more than two vertices. A hyperedge is not cut if all its vertices are in one partition, and is cut exactly once otherwise, no matter how many vertices are on each side. This usage is common in electronic design automation.
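The two quantities in this definition, the cut capacity and the (k, v) balance constraint, can be computed directly. A minimal sketch with an invented four-vertex graph:

```python
from collections import Counter

def cut_size(edges, part):
    """Number of edges whose endpoints lie in different components.
    edges: iterable of (u, v) pairs; part: dict vertex -> component index."""
    return sum(1 for u, v in edges if part[u] != part[v])

def is_balanced(part, k, v_factor, n):
    """(k, v) balance: at most k components, each of size <= v_factor * (n/k)."""
    sizes = Counter(part.values())
    return len(sizes) <= k and max(sizes.values()) <= v_factor * (n / k)

# Invented graph: a 4-cycle a-b-c-d plus the chord a-c, split into two halves.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
print(cut_size(edges, part))         # 3 -- edges b-c, d-a, a-c cross the cut
print(is_balanced(part, 2, 1.0, 4))  # True -- each side has exactly n/k = 2 vertices
```

A (k, 1) partition corresponds to `v_factor = 1.0`, i.e., perfectly equal parts; relaxing `v_factor` to 1 + ε gives the balanced partition problem analyzed in the next subsection.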

B. Analysis

For a specific (k, 1 + ε) balanced partition problem, we seek a minimum-cost partition of G into k components, with each component containing at most (1 + ε)·(n/k) nodes. We compare the cost of this approximation algorithm to the cost of a (k, 1) cut, wherein each of the k components must have exactly the same size of (n/k) nodes, making it a more restricted problem. We already know that the (2, 1) cut is the minimum bisection problem and that it is NP-complete [6]. Next we assess a 3-partition problem where n = 3k, which is also bounded in polynomial time [1]. Now, if we assume that we have a finite approximation algorithm for (k, 1)-balanced partition, then either the 3-partition instance can be solved using the balanced (k, 1) partition in G, or it cannot. If the 3-partition instance can be solved, then the (k, 1)-balanced partitioning problem in G can be solved without cutting any edge. Otherwise, if the 3-partition instance cannot be solved, the optimal (k, 1)-balanced partitioning in G will cut at least one edge. An approximation algorithm with a finite approximation factor has to differentiate between these two cases. Hence, it would solve the 3-partition problem, which is a contradiction unless P = NP. Thus, it is evident that the (k, 1)-balanced partitioning problem has no polynomial-time approximation algorithm with a finite approximation factor unless P = NP [1].

The planar separator theorem states that any n-vertex planar graph can be partitioned into roughly equal parts by the removal of O(√n) vertices. This is not a partition in the sense described above, because the partition set consists of vertices rather than edges. However, the same result also implies that every planar graph of bounded degree has a balanced cut with O(√n) edges.

1. Graph partition methods

Since graph partitioning is a hard problem, practical solutions are based on heuristics. There are two broad categories of methods, local and global. Well-known local methods are the Kernighan-Lin algorithm and the Fiduccia-Mattheyses algorithm, which were the first effective 2-way cuts obtained by local search. Their major drawback is the arbitrary initial partitioning of the vertex set, which can affect the final solution quality. Global approaches rely on properties of the entire graph and do not depend on an arbitrary initial partition. The most common example is spectral partitioning, where a partition is derived from the spectrum of the adjacency matrix.
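The local-search idea can be sketched with a simplified, Kernighan-Lin-style pass: repeatedly swap the cross-partition vertex pair that most reduces the cut. This is only a greedy sketch; the real Kernighan-Lin algorithm also makes tentative, temporarily worsening swaps and then keeps the best prefix, which this sketch omits. The six-vertex graph is invented for illustration.

```python
def cut_size(adj, side):
    # adj lists each edge twice (u->v and v->u), so divide the count by 2.
    return sum(1 for u in adj for v in adj[u] if side[u] != side[v]) // 2

def kl_pass(adj, side):
    """Greedy Kernighan-Lin-style refinement: keep swapping the one pair of
    vertices (one per side) with the largest cut reduction until none helps.
    Swapping a pair preserves the balance of the two sides."""
    improved = True
    while improved:
        improved = False
        base = cut_size(adj, side)
        best = (0, None, None)
        left = [u for u in side if side[u] == 0]
        right = [u for u in side if side[u] == 1]
        for u in left:
            for v in right:
                side[u], side[v] = 1, 0          # try the swap ...
                gain = base - cut_size(adj, side)
                side[u], side[v] = 0, 1          # ... and undo it
                if gain > best[0]:
                    best = (gain, u, v)
        if best[1] is not None:
            _, u, v = best
            side[u], side[v] = 1, 0              # commit the best swap
            improved = True
    return side

# Two triangles {1,2,3} and {4,5,6} joined by the bridge 3-4,
# starting from a deliberately poor (but balanced) initial split.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4], 4: [3, 5, 6], 5: [4, 6], 6: [4, 5]}
side = {1: 0, 2: 1, 3: 0, 4: 1, 5: 0, 6: 1}
kl_pass(adj, side)
print(cut_size(adj, side))  # 1 -- only the bridge edge 3-4 remains cut
```

This illustrates the drawback mentioned above: the result depends on the initial partition, which is why multi-level and spectral methods are used to supply better starting points.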

2. Multi-level methods

A multi-level graph partitioning algorithm works by applying one or more stages. Each stage reduces the size of the graph by collapsing vertices and edges, partitions the smaller graph, then maps back and refines this partition of the original graph [7]. A wide variety of partitioning and refinement methods can be applied within the overall multi-level scheme. In many cases, this approach can give both fast execution times and very high quality results. One widely used example of such an approach is METIS [8], a graph partitioner, and hMETIS, the corresponding partitioner for hypergraphs [9].

3. Software tools

One of the earliest publicly available packages, called Chaco [15], is due to Hendrickson and Leland. Like most publicly available software, Chaco implements the multilevel approach outlined above and basic local search algorithms. Moreover, it implements spectral partitioning techniques.

METIS [8] is a graph partitioning family. kMetis focuses on partitioning speed, while hMetis [9], a hypergraph partitioner, aims at partition quality. ParMetis [8] is a parallel implementation of the METIS graph partitioning algorithm.

PaToH [16] is another widely used hypergraph partitioner that produces high-quality partitions.

Scotch [17] is a graph partitioning framework by Pellegrini. It uses recursive multilevel bisection and includes sequential as well as parallel partitioning techniques.

Jostle [18] is a sequential and parallel graph partitioning solver developed by Chris Walshaw. The commercialized version of this partitioner is known as NetWorks.

If a model of the communication network is available, then Jostle and Scotch are able to take this model into account for the partitioning process.

Party [19] implements the Bubble/shape-optimized framework and the Helpful Sets algorithm.

The software packages DibaP [20] and its MPI-parallel variant PDibaP [21] by Meyerhenke implement the Bubble framework using diffusion; DibaP also uses AMG-based techniques for coarsening and for solving the linear systems arising in the diffusive approach.

Sanders and Schulz released a graph partitioning package, KaHIP [22] (Karlsruhe High Quality Partitioning), that focuses on solution quality. It implements, for example, flow-based methods, more-localized local searches, and several parallel and sequential meta-heuristics.

To address the load balancing problem in parallel applications, distributed versions of the sequential partitioners Metis, Jostle, and Scotch have been developed.

The tools Parkway [23] by Trifunovic and Knottenbelt, as well as Zoltan [24] by Devine et al., focus on hypergraph partitioning.

List of free open-source frameworks:

Name       License     Brief information
Chaco      GPL         software implementing spectral techniques and the multilevel approach
DiBaP      *           graph partitioning based on multilevel techniques, algebraic multigrid, as well as graph-based diffusion
Jostle     *           multilevel partitioning techniques and diffusive load balancing, sequential and parallel
KaHIP      GPL         several parallel and sequential meta-heuristics, guarantees the balance constraint
kMetis     Apache 2.0  graph partitioning package based on multilevel techniques and k-way local search
Mondriaan  LGPL        matrix partitioner to partition rectangular sparse matrices
PaToH      BSD         multilevel hypergraph partitioning
Parkway    *           parallel multilevel hypergraph partitioning
Scotch     CeCILL-C    implements multilevel recursive bisection as well as diffusion techniques, sequential and parallel
Zoltan     BSD         hypergraph partitioning

After fetching the right data from the underlying database, a recommender or filtering system algorithm is proposed for the n-word data object.

C. Filtering System algorithm

Text databases consist mostly of huge collections of documents. They collect this information from several sources such as news articles, books, digital libraries, e-mail messages, web pages, and so on. Due to the increasing amount of information, text databases are growing rapidly. In many text databases, the data is semi-structured.

For example, a document may contain a few structured fields, such as title, author, and publishing date, but alongside the structured data the document also contains unstructured text components, such as the abstract and contents. Without knowing what is in the documents, it is difficult to formulate effective queries for analyzing and extracting useful information from the data. Users need tools to compare documents and rank their importance and relevance. Therefore, text mining has become a popular and essential theme in data mining.

D. Information Retrieval

Information retrieval deals with the retrieval of information from a large number of text-based documents. Some database systems are not usually present in information retrieval systems because each handles different kinds of data. The following are examples of information retrieval systems:

- Online library catalog systems.

- Online document management systems.

- Web search systems, etc.

Note: the main problem in an information retrieval system is to locate relevant documents in a document collection based on a user's query. This kind of user query consists of some keywords describing an information need.

In this kind of search problem, the user takes the initiative to pull the relevant information out of the collection. This is appropriate when the user has an ad-hoc information need, i.e., a short-term need. But if the user has a long-term information need, the retrieval system can also take the initiative to push any newly arrived information item to the user.

This kind of access to information is called information filtering, and the corresponding systems are known as filtering systems or recommender systems.
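The push model just described can be sketched in a few lines: the system holds a long-term interest profile and forwards newly arrived documents that match it. This is a minimal keyword-overlap sketch, not a full recommender; the profile and document stream are invented for illustration.

```python
def filter_stream(profile_keywords, incoming_docs):
    """Push-style information filtering: forward each newly arrived document
    that shares at least one keyword with the user's long-term profile."""
    profile = {w.lower() for w in profile_keywords}
    pushed = []
    for doc in incoming_docs:
        words = set(doc.lower().split())
        if words & profile:          # any overlap with the interest profile
            pushed.append(doc)
    return pushed

profile = ["mining", "ranking"]
stream = ["new page ranking algorithm", "cooking recipes", "web mining survey"]
print(filter_stream(profile, stream))
# ['new page ranking algorithm', 'web mining survey']
```

Contrast with the pull model above: here the system, not the user, initiates delivery whenever new items arrive.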

1. Basic Measures for Text Retrieval

We need to check how accurate the system is when it retrieves a number of documents on the basis of a user's input. Let the set of documents relevant to a query be denoted as {Relevant} and the set of retrieved documents as {Retrieved}. The set of documents that are both relevant and retrieved can be denoted as {Relevant} ∩ {Retrieved}. This can be shown in a Venn diagram.


There are three fundamental measures for assessing the quality of text retrieval:

- Precision

- Recall

- F-score


Precision is the percentage of retrieved documents that are in fact relevant to the query. Precision can be defined as:

Precision = |{Relevant} ∩ {Retrieved}| / |{Retrieved}|


Recall is the percentage of documents that are relevant to the query and were in fact retrieved. Recall is defined as:

Recall = |{Relevant} ∩ {Retrieved}| / |{Relevant}|


F-score is the commonly used trade-off. An information retrieval system often needs to trade off precision for recall, or vice versa. F-score is defined as the harmonic mean of recall and precision:

F-score = (recall × precision) / [(recall + precision) / 2]
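The three measures follow directly from the set definitions above. A minimal sketch, with invented relevant and retrieved sets:

```python
def precision_recall_f(relevant, retrieved):
    hit = relevant & retrieved                    # {Relevant} ∩ {Retrieved}
    p = len(hit) / len(retrieved) if retrieved else 0.0
    r = len(hit) / len(relevant) if relevant else 0.0
    # Harmonic mean, written as in the formula above: (r*p) / ((r+p)/2).
    f = (r * p) / ((r + p) / 2) if (r + p) else 0.0
    return p, r, f

relevant = {"d1", "d2", "d3", "d4"}   # 4 relevant documents
retrieved = {"d2", "d3", "d5"}        # 3 retrieved, of which 2 are relevant
p, r, f = precision_recall_f(relevant, retrieved)
print(round(p, 3), round(r, 3), round(f, 3))  # 0.667 0.5 0.571
```

Note that (r·p) / ((r+p)/2) is algebraically identical to the familiar 2pr/(p+r) form of the harmonic mean.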



We proposed a semantic similarity measure using both page counts and snippets retrieved from a web search engine for two words. Four word co-occurrence measures were computed using page counts. We proposed a lexical pattern extraction algorithm to extract the various semantic relations that exist between two words. Moreover, a sequential pattern clustering algorithm was proposed to identify different lexical patterns that describe the same semantic relation. Both page-count-based co-occurrence measures and lexical pattern clusters are used to define the features for a word pair. A two-class SVM was trained using those features extracted for synonymous and non-synonymous word pairs selected from WordNet synsets. Experimental results on benchmark data sets showed that the proposed method outperforms various baselines as well as previously proposed web-based semantic similarity measures, achieving a high correlation with human ratings. Moreover, the proposed method improved the F-score in a community mining task.

To ensure that their audiences can still easily find their work through search engines, web developers should expect to keep up to date with the evolving search algorithms, SEO practices, their website's traffic, and their competition. Besides some of the sources cited in this research, web developers can stay current by regularly drawing on the dynamic SEO resources featured in an annotated bibliography, to best effect.

Source: Essay UK

About this resource

This Information Technology essay was submitted to us by a student in order to help you with your studies.


    If you use part of this page in your own work, you need to provide a citation, as follows:

    Essay UK, Website Mining. Available from: <> [27-05-20].
