hyperpose.Model.pose_proposal package

Submodules

hyperpose.Model.pose_proposal.define module
class hyperpose.Model.pose_proposal.define.CocoPart
    Bases: enum.Enum

    An enumeration.

    Instance = 1
    LAnkle = 13
    LEar = 17
    LElbow = 6
    LEye = 15
    LHip = 11
    LKnee = 12
    LShoulder = 5
    LWrist = 7
    Nose = 0
    RAnkle = 10
    REar = 16
    RElbow = 3
    REye = 14
    RHip = 8
    RKnee = 9
    RShoulder = 2
    RWrist = 4
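A short usage sketch of the enumeration; the printed values are taken directly from the member list above.

    from hyperpose.Model.pose_proposal.define import CocoPart

    print(CocoPart.Nose.value)       # 0
    print(CocoPart.LShoulder.value)  # 5
    print(CocoPart(10).name)         # 'RAnkle'
    print(len(CocoPart))             # 18 members, matching the PoseProposal K_size default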
class hyperpose.Model.pose_proposal.define.MpiiPart
    Bases: enum.Enum

    An enumeration.

    Center = 14
    Headtop = 0
    Instance = 15
    LAnkle = 13
    LElbow = 6
    LHip = 11
    LKnee = 12
    LShoulder = 5
    LWrist = 7
    Neck = 1
    RAnkle = 10
    RElbow = 3
    RHip = 8
    RKnee = 9
    RShoulder = 2
    RWrist = 4
hyperpose.Model.pose_proposal.define.get_coco_flip_list()

hyperpose.Model.pose_proposal.define.get_mpii_flip_list()
hyperpose.Model.pose_proposal.eval module
hyperpose.Model.pose_proposal.eval.evaluate(model, dataset, config, vis_num=30, total_eval_num=30, enable_multiscale_search=False)

    Evaluate pipeline of PoseProposal class models.

    Given a model and a dataset, the evaluate pipeline starts automatically. It will:
    1. load the newest model weights from ./save_dir/model_name/model_dir/newest_model.npz
    2. perform inference and parsing over the chosen evaluation dataset
    3. save visualizations of model outputs to the directory ./save_dir/model_name/eval_vis_dir
    4. output model metrics by calling dataset.official_eval()

    Parameters
        arg1 : tensorlayer.models.MODEL
            a preset or user-defined model object, obtained by the Model.get_model() function
        arg2 : dataset
            a constructed dataset object, obtained by the Dataset.get_dataset() function
        arg3 : Int
            an integer indicating how many model outputs should be visualized
        arg4 : Int
            an integer indicating how many images should be evaluated

    Returns
        None
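A hedged usage sketch of the evaluate pipeline. The model and dataset construction follows the parameter descriptions above; the Config.get_config() call is an assumption about the high-level hyperpose API.

    from hyperpose import Config, Model, Dataset
    from hyperpose.Model.pose_proposal.eval import evaluate

    config = Config.get_config()            # assumed helper that builds the configuration object
    model = Model.get_model(config)         # preset or user-defined model, as documented above
    dataset = Dataset.get_dataset(config)   # constructed dataset object, as documented above

    # Loads the newest weights, runs inference over the dataset, saves
    # visualizations, and reports metrics via dataset.official_eval().
    evaluate(model, dataset, config, vis_num=30, total_eval_num=30)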
hyperpose.Model.pose_proposal.eval.infer_one_img(model, post_processor, img, img_id=-1, is_visual=False, save_dir='./vis_dir/pose_proposal')

hyperpose.Model.pose_proposal.eval.visualize(img, img_id, humans, predicts, hnei, wnei, hout, wout, limbs, save_dir)
hyperpose.Model.pose_proposal.infer module

hyperpose.Model.pose_proposal.model module
class hyperpose.Model.pose_proposal.model.PoseProposal(parts=<enum 'CocoPart'>, limbs=[(1, 8), (8, 9), (9, 10), (1, 11), (11, 12), (12, 13), (1, 2), (2, 3), (3, 4), (1, 5), (5, 6), (6, 7), (1, 0), (0, 14), (0, 15), (14, 16), (15, 17)], colors=None, K_size=18, L_size=17, win=384, hin=384, wout=12, hout=12, wnei=9, hnei=9, lmd_rsp=0.25, lmd_iou=1, lmd_coor=5, lmd_size=5, lmd_limb=0.5, backbone=None, pretraining=False, data_format='channels_first')

    Bases: tensorlayer.models.core.Model

    Attributes
        all_drop
        all_layers
            Return all layers of this network in a list.
        all_params
        all_weights
            Return all weights of this network in a list.
        config
        inputs
        n_weights
            Return the number of weights (parameters) in this network.
        nontrainable_weights
            Return the nontrainable weights of this network in a list.
        outputs
        trainable_weights
            Return the trainable weights of this network in a list.

    Methods
        __call__(self, inputs[, is_train])
            Forward input tensors through this network by calling.
        as_layer(self)
            Return this network as a ModelLayer so that it can be integrated into another Model.
        eval(self)
            Set this network in evaluation mode.
        forward(self, x[, is_train, domainadapt])
            Network forwarding given input tensors.
        get_layer(self[, name, index])
            Return the layer with the given name or index.
        infer(self, x)
            Set this network in evaluation mode.
        load(filepath[, load_weights])
            Load a model from a given file, which should have been previously saved by Model.save().
        load_weights(self, filepath[, format, …])
            Load model weights from a given file, which should have been previously saved by self.save_weights().
        print_all_layers(self)
        release_memory(self)
            WARNING: This function should be called with great caution.
        save(self, filepath[, save_weights, …])
            Save the model into a given file.
        save_weights(self, filepath[, format])
            Given a filepath, save the model weights into a file of the given format.
        test(self)
            Set this network in evaluation mode.
        train(self)
            Set this network in training mode.
        cal_iou
        cal_loss
        count_params
        print_params
        restore_coor

    cal_iou(self, bbx1, bbx2)

    cal_loss(self, delta, tx, ty, tw, th, te, te_mask, pc, pi, px, py, pw, ph, pe, eps=1e-06)

    forward(self, x, is_train=False, domainadapt=False)

        Network forwarding given input tensors.

        Parameters
            inputs : Tensor or list of Tensors
                input tensor(s)
            kwargs :
                for other keyword-only arguments
        Returns
            output tensor(s) : Tensor or list of Tensor(s)

    infer(self, x)

        Set this network in evaluation mode.

    restore_coor(self, x, y, w, h)
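A minimal construction-and-forward sketch under the documented constructor defaults (18 keypoints, 17 limbs, 384x384 input, channels_first); it assumes the default backbone is built internally when backbone=None and makes no assumption about the exact structure of the returned feature maps.

    import tensorflow as tf
    from hyperpose.Model.pose_proposal.model import PoseProposal

    model = PoseProposal(data_format='channels_first')
    model.eval()                                        # evaluation mode, as documented

    x = tf.zeros((1, 3, 384, 384), dtype=tf.float32)    # dummy batch of one RGB image
    outputs = model.forward(x, is_train=False)          # predicted feature maps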
hyperpose.Model.pose_proposal.train module
hyperpose.Model.pose_proposal.train.get_paramed_map_fn(hin, win, hout, wout, hnei, wnei, parts, limbs, data_format='channels_first')
hyperpose.Model.pose_proposal.train.parallel_train(train_model, dataset, config)

    Parallel train pipeline of PoseProposal class models.

    Given a model and a dataset, the train pipeline starts automatically. It will:
    1. store and restore checkpoints in the directory ./save_dir/model_name/model_dir
    2. log loss information in ./save_dir/model_name/log.txt
    3. periodically visualize model outputs during training in the directory ./save_dir/model_name/train_vis_dir
    The newest model is saved at ./save_dir/model_name/model_dir/newest_model.npz.

    Parameters
        arg1 : tensorlayer.models.MODEL
            a preset or user-defined model object, obtained by the Model.get_model() function
        arg2 : dataset
            a constructed dataset object, obtained by the Dataset.get_dataset() function

    Returns
        None
hyperpose.Model.pose_proposal.train.regulize_loss(target_model, weight_decay_factor)
hyperpose.Model.pose_proposal.train.single_train(train_model, dataset, config)

    Single train pipeline of PoseProposal class models.

    Given a model and a dataset, the train pipeline starts automatically. It will:
    1. store and restore checkpoints in the directory ./save_dir/model_name/model_dir
    2. log loss information in ./save_dir/model_name/log.txt
    3. periodically visualize model outputs during training in the directory ./save_dir/model_name/train_vis_dir
    The newest model is saved at ./save_dir/model_name/model_dir/newest_model.npz.

    Parameters
        arg1 : tensorlayer.models.MODEL
            a preset or user-defined model object, obtained by the Model.get_model() function
        arg2 : dataset
            a constructed dataset object, obtained by the Dataset.get_dataset() function

    Returns
        None
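A hedged end-to-end training sketch. The single_train and parallel_train signatures follow the documentation above; the Config.get_config() call is an assumption about the high-level hyperpose API.

    from hyperpose import Config, Model, Dataset
    from hyperpose.Model.pose_proposal.train import single_train, parallel_train

    config = Config.get_config()            # assumed helper that builds the configuration object
    model = Model.get_model(config)         # documented source of the train_model argument
    dataset = Dataset.get_dataset(config)   # documented source of the dataset argument

    # Checkpoints, logs, and periodic visualizations are written under
    # ./save_dir/model_name/ as described above.
    single_train(model, dataset, config)
    # parallel_train(model, dataset, config)  # distributed variant with the same interface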
hyperpose.Model.pose_proposal.utils module

hyperpose.Model.pose_proposal.utils.cal_iou(bbx1, bbx2)

hyperpose.Model.pose_proposal.utils.draw_bbx(img, img_pc, rx, ry, rw, rh, threshold=0.7)

hyperpose.Model.pose_proposal.utils.draw_edge(img, img_e, rx, ry, rw, rh, hnei, wnei, hout, wout, limbs, threshold=0.7)

hyperpose.Model.pose_proposal.utils.draw_results(img, predicts, targets, parts, limbs, save_dir, threshold=0.3, name='', is_train=True, data_format='channels_first')

hyperpose.Model.pose_proposal.utils.get_colors(dataset_type)

hyperpose.Model.pose_proposal.utils.get_flip_list(dataset_type)

hyperpose.Model.pose_proposal.utils.get_limbs(dataset_type)

hyperpose.Model.pose_proposal.utils.get_parts(dataset_type)

hyperpose.Model.pose_proposal.utils.get_pose_proposals(kpts_list, bbxs, hin, win, hout, wout, hnei, wnei, parts, limbs, img_mask=None, data_format='channels_first')

hyperpose.Model.pose_proposal.utils.non_maximium_supress(bbxs, scores, thres)
hyperpose.Model.pose_proposal.utils.postprocess(predicts, parts, limbs, data_format='channels_first', colors=None)

    Postprocess function of PoseProposal class models.

    Takes the model's predicted feature maps delta, tx, ty, tw, th, te, te_mask and outputs the parsed human objects, each of which contains all detected keypoints of one person.

    Parameters
        arg1 : list
            a list of model outputs: delta, tx, ty, tw, th, te, te_mask
            delta: keypoint confidence feature map, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tx: keypoint bbx center x coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            ty: keypoint bbx center y coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tw: keypoint bbx widths w, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            th: keypoint bbx heights h, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            te: edge confidence feature map, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
            te_mask: mask of the edge confidence feature map, used for loss calculation, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
        arg2 : Config.DATA
            an enum value of the enum class Config.DATA indicating which dataset the input annotation list comes from; the keypoint and limb order must be known to generate the correct conf_map and paf_map
        arg3 : string
            data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W), 'channels_last' (data shape H*W*C)

    Returns
        list
            a list of human objects; see Model.Human for detailed information on the Human object
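A hedged wrapper showing how get_parts and get_limbs from this module feed postprocess; the use of Config.DATA.MSCOCO as the dataset type is an assumption for illustration.

    from hyperpose import Config
    from hyperpose.Model.pose_proposal.utils import get_parts, get_limbs, postprocess

    def parse_predictions(predicts, data_format='channels_first'):
        """Parse raw maps [delta, tx, ty, tw, th, te, te_mask] into Human objects."""
        parts = get_parts(Config.DATA.MSCOCO)   # dataset-specific keypoint ordering
        limbs = get_limbs(Config.DATA.MSCOCO)   # dataset-specific limb (part-pair) list
        return postprocess(predicts, parts, limbs, data_format=data_format)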
hyperpose.Model.pose_proposal.utils.preprocess(annos, bbxs, model_hin, modeL_win, model_hout, model_wout, model_hnei, model_wnei, parts, limbs, data_format='channels_first')

    Preprocess function of PoseProposal class models.

    Takes keypoint annotations, bounding box annotations, the model input height and width, the model limbs neighbor area height and width, and the dataset type, and returns the constructed targets delta, tx, ty, tw, th, te, te_mask. See the sketch after this entry for how the arguments fit together.

    Parameters
        arg1 : list
            a list of keypoint annotations; each annotation is a list of keypoints belonging to one person, each keypoint follows the format (x, y), with x<0 or y<0 if the keypoint is not visible or not annotated. The annotations must come from a known dataset_type, otherwise the keypoint and limb order will not be correct.
        arg2 : list
            a list of bounding box annotations; each bounding box has the format [x, y, w, h]
        arg3 : Int
            height of the model input
        arg4 : Int
            width of the model input
        arg5 : Int
            height of the model output
        arg6 : Int
            width of the model output
        arg7 : Int
            model limbs neighbor area height, which determines the neighbor area used to match limbs; see the Pose Proposal paper for details
        arg8 : Int
            model limbs neighbor area width, which determines the neighbor area used to match limbs; see the Pose Proposal paper for details
        arg9 : Config.DATA
            an enum value of the enum class Config.DATA indicating which dataset the input annotation list comes from; the keypoint and limb order must be known to generate the correct conf_map and paf_map
        arg10 : string
            data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W), 'channels_last' (data shape H*W*C)

    Returns
        list
            containing 7 elements:
            delta: keypoint confidence feature map, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tx: keypoint bbx center x coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            ty: keypoint bbx center y coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tw: keypoint bbx widths w, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            th: keypoint bbx heights h, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            te: edge confidence feature map, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
            te_mask: mask of the edge confidence feature map, used for loss calculation, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
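A hedged wrapper sketch for building training targets for one annotated image; the 384x384 input, 12x12 output grid, and 9x9 neighbor area mirror the PoseProposal constructor defaults, and Config.DATA.MSCOCO is an assumption for illustration.

    from hyperpose import Config
    from hyperpose.Model.pose_proposal.utils import get_parts, get_limbs, preprocess

    def build_targets(annos, bbxs):
        """Build the delta, tx, ty, tw, th, te, te_mask targets for one image.

        annos: per-person keypoint lists in the dataset's part order, (x, y)
        pairs with negative values marking missing keypoints.
        bbxs: per-person bounding boxes in [x, y, w, h] format.
        """
        parts = get_parts(Config.DATA.MSCOCO)
        limbs = get_limbs(Config.DATA.MSCOCO)
        return preprocess(annos, bbxs, 384, 384, 12, 12, 9, 9, parts, limbs,
                          data_format='channels_first')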
hyperpose.Model.pose_proposal.utils.restore_coor(x, y, w, h, win, hin, wout, hout, data_format='channels_first')
hyperpose.Model.pose_proposal.utils.visualize(img, predicts, parts, limbs, save_name='bbxs', save_dir='./save_dir/vis_dir', data_format='channels_first', save_tofile=True)

    Visualize function of PoseProposal class models.

    Takes the model's predicted feature maps delta, tx, ty, tw, th, te, te_mask and outputs a visualized image. The image is saved at 'save_dir'/'save_name'_visualize.png.

    Parameters
        arg1 : numpy array
            the image
        arg2 : list
            a list of model outputs: delta, tx, ty, tw, th, te, te_mask
            delta: keypoint confidence feature map, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tx: keypoint bbx center x coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            ty: keypoint bbx center y coordinates, divided by gridsize, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            tw: keypoint bbx widths w, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            th: keypoint bbx heights h, divided by image width, shape [C,H,W] (channels_first) or [H,W,C] (channels_last)
            te: edge confidence feature map, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
            te_mask: mask of the edge confidence feature map, used for loss calculation, shape [C,H,W,Hnei,Wnei] (channels_first) or [H,W,Hnei,Wnei,C] (channels_last)
        arg3 : Config.DATA
            an enum value of the enum class Config.DATA indicating which dataset the input annotation list comes from
        arg4 : String
            the output image name, used to distinguish saved files
        arg5 : String
            the directory in which to save the visualized image
        arg6 : string
            data format specifying the channel order; available inputs: 'channels_first' (data shape C*H*W), 'channels_last' (data shape H*W*C)

    Returns
        None
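A hedged wrapper mirroring the postprocess sketch above, this time drawing and saving the prediction; the dataset type is again an assumption for illustration.

    from hyperpose import Config
    from hyperpose.Model.pose_proposal.utils import get_parts, get_limbs, visualize

    def save_visualization(img, predicts, save_name='example'):
        """Draw predicts on img and save ./save_dir/vis_dir/<save_name>_visualize.png."""
        parts = get_parts(Config.DATA.MSCOCO)
        limbs = get_limbs(Config.DATA.MSCOCO)
        visualize(img, predicts, parts, limbs, save_name=save_name,
                  save_dir='./save_dir/vis_dir', data_format='channels_first')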