
hyperplane


Persian

1 Electrical and Electronics:: ابر صفحه (hyperplane)

The hyperplane is based on two principal elements: a weight vector w and a bias b, which determines the distance of the hyperplane from the origin. The positive and negative samples closest to the hyperplane are called support vectors (SV). In the SVM algorithm, the optimization problem is to find the optimal hyperplane that separates the classes [60]. New samples that satisfy the KKT conditions are assumed to have no influence on updating the hyperplane. The second group of samples are those in the newly available dataset that violate the KKT conditions, and the third group are those lying between the center of each class and the hyperplane.
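
These elements can be illustrated with a minimal sketch, assuming scikit-learn's linear SVC estimator and a small synthetic two-class dataset (both are illustrative choices, not taken from the entry's source texts): the fitted model exposes the weight vector w, the bias b, and the support vectors, and the hyperplane's distance from the origin follows as |b| / ‖w‖.

```python
# Hedged sketch: fit a linear SVM and read off the hyperplane parameters
# described above. The dataset and estimator choice are assumptions made
# for illustration only.
import numpy as np
from sklearn.svm import SVC

np.random.seed(0)

# Toy linearly separable data: two Gaussian clusters in 2-D.
X = np.vstack([np.random.randn(20, 2) + [2, 2],
               np.random.randn(20, 2) - [2, 2]])
y = np.array([1] * 20 + [0] * 20)

clf = SVC(kernel="linear").fit(X, y)

w = clf.coef_[0]           # weight vector w (normal to the hyperplane)
b = clf.intercept_[0]      # bias b
sv = clf.support_vectors_  # samples closest to the hyperplane

# Hyperplane: w . x + b = 0; its distance from the origin is |b| / ||w||.
print("w =", w, " b =", b)
print("distance to origin =", abs(b) / np.linalg.norm(w))
print("number of support vectors:", len(sv))
```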
