ME6105.HW5.CombierDeTenorioDunham


HW5: Preference Modeling and Optimization
ME 6105
Revisit the decision situation identified in HW2
In this project we attempted to identify the optimal size of the hydraulic systems composing the flight control subsystem of a passenger aircraft. The pool of design alternatives considered for this exercise consisted of hydraulic systems of varying sizes. The objectives considered in this exercise were:
Maximize the controllability of the aircraft
Minimize the operating cost of the aircraft.
Although the controllability of the aircraft would typically be evaluated using a flight dynamics analysis if this exercise were done in industry, the overall controllability of the aircraft was instead quantified using an "Overall Evaluation Criterion" (OEC) type of equation over the reaction times of the control surfaces. The value defined by this equation, the Control_Performance variable, was treated as a surrogate expression for the controllability of the aircraft in this project.
t_X: maximum reaction time across all scenarios (normal and failure) for actuated object "X"
AO: Outboard Ailerons	AI: Inboard Ailerons	Sp: Spoilers	Fl: Flaps	Sl: Slats
NG: Nose Gear		MG: Main Gears		El: Elevators	Ru: Rudder
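As an illustration, an OEC of this kind can be sketched as a weighted average of scores derived from the worst-case reaction times. The report does not give the actual form of the equation, so the equal weights, the 10-second zero-score time, and the linear scoring below are assumptions for illustration only:

```python
# Hypothetical sketch of an "Overall Evaluation Criterion" for controllability.
# Worst-case reaction times (s) per actuated family; values are illustrative.
reaction_times = {
    "AO": 0.42, "AI": 0.40, "Sp": 0.35, "Fl": 1.8, "Sl": 1.9,
    "El": 0.38, "Ru": 0.45, "NG": 6.0, "MG": 7.5,
}

def control_performance(t, t_worst=10.0, weights=None):
    """Map each reaction time to a 0-100 score (faster is better) and take a
    weighted average. t_worst is an assumed time that scores zero."""
    w = weights or {k: 1.0 for k in t}
    total = sum(w.values())
    return sum(w[k] * 100.0 * max(0.0, 1.0 - t[k] / t_worst) for k in t) / total

score = control_performance(reaction_times)
```

The actual project equation presumably emphasized failure scenarios and specific surfaces differently; the sketch only shows the general "aggregate worst-case reaction times into one scalar" idea.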
The cost of operation was defined by considering:
Maintenance Cost (cost associated with maintaining the hydraulic systems)
Additional fuel cost (cost of additional fuel burn associated with production of hydraulic power and additional fuel burn associated with flying the weight of hydraulic systems)
The design variables in this project were the sizing factors on hydraulic actuation systems. Those factors were defined as non-dimensional variables which refer to a multiplicative factor on the piston area for actuators. Additionally, these variables also sized the servo nominal flow for actuators relative to the piston area. In the case of the hydraulic motors, the non-dimensional variables sized the flow per rotation and servo nominal flow. First, the piston areas and related design parameters were set to a nominal size which produced decent and reliable results in the Dymola models. The non-dimensional multiplicative factors were then used to modify these nominal values. The sizes of the pumps per network were automatically adjusted according to the flow required by their users.
Overall there were 9 families of objects to be actuated (outboard ailerons, inboard ailerons, spoilers, flaps, slats, elevators, rudder, nose landing gear and main landing gears). In order to simplify the optimization problem, the nose and main landing gears were not considered for the optimization. Among the remaining seven families, the sizing factors for the ailerons and spoilers were grouped under one sizing factor, and the flaps and slats under another. Four design variables were therefore considered:
El_input: sizing factor for elevator
Ru_input: sizing factor for rudder
AL_input: sizing factor for inboard and outboard ailerons and spoilers
FLSL_input: sizing factor for slats and flaps
The design variable range used to define the design space was set arbitrarily. This range served primarily to protect the execution of the Dymola processes: since the optimizer was guided by the expected utility function, we did not want to expose the optimization process to singularities in the Dymola models. Unrealistic design evaluations could lead the optimizer into parts of the space where the Dymola models are unable to converge, thereby freezing, or at least penalizing, the execution of the optimization routine. Although the design variable range was arbitrary, a trial-and-error process was used to set it approximately. First, varying values of the design variables were run manually in the Dymola models to determine a safe region for the values. This region was then adjusted as necessary to allow suitable optimization without the optimum landing on the extreme end of the design variable range. The final range used for the design variables was 0.8 to 1.4.
Elicit Preferences – Utility Function
Once the design situation had been revisited and appropriately set up for this problem, it was time to elicit the utility function preferences. This step was done by two members of the group (Joel and Robert) and the results were combined for the final functions.
For Joel, the elicitation was as follows. The first step was to elicit the individual utility curves for the cost and performance attributes. For the cost utility, the range was set as $27000 - $35000 based on the uncertainty analysis from the previous homework. Naturally, $27000 had a utility of 1 while $35000 had a utility of 0, since the objective was to minimize cost. Furthermore, since a lower cost is much preferred to a higher cost (also in terms of risk), the S-curve is skewed towards the low-cost end, with $31000 (the middle point) having a utility of only 20%. The 50% utility was placed at $30000 since that cost would be acceptable and would be preferred to a 50/50 chance of ending up with a $35000 cost.
Figure 1: Cost utility function
The second utility curve elicited was the control performance curve. In this case, the range was between 92 and 97 based on the previous uncertainty analysis. Since Joel is an aerospace controls engineer and has prior experience, his preferences were highly skewed towards the upper end of the control performance range, with a 50/50 chance tradeoff being set at 95.5 (the midpoint of the range is 94.5). The rest of the elicitations were set to produce a smooth curve with 80% set at 96 and 20% set at 94. 
Figure 2: Control performance utility function.
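The two elicited curves can be approximated, for example, by piecewise-linear interpolation through the stated elicitation points. This is only a rough stand-in for the smooth S-curves of Figures 1 and 2; the exact curve shapes are not given in the text:

```python
import numpy as np

# Piecewise-linear interpolation through the elicited points (approximation
# of the smooth S-curves in Figures 1 and 2).
cost_pts = np.array([27000.0, 30000.0, 31000.0, 35000.0])
cost_u   = np.array([1.0,     0.5,     0.2,     0.0])

perf_pts = np.array([92.0, 94.0, 95.5, 96.0, 97.0])
perf_u   = np.array([0.0,  0.2,  0.5,  0.8,  1.0])

def cost_utility(c):
    # np.interp needs increasing x; utility decreases with cost.
    return float(np.interp(c, cost_pts, cost_u))

def perf_utility(p):
    return float(np.interp(p, perf_pts, perf_u))
```

A logistic or exponential fit through the same points would reproduce the S-shape more faithfully; linear interpolation suffices to reproduce the elicited values exactly.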
In both individual utility curves, the result is a combination of risk-seeking and risk-averse behavior, although the risk-averse behavior has the larger effect.
The next step was to elicit the multi-attribute utility relationship between the two individual utilities previously discussed. For this task, the two-attribute utility trades Excel spreadsheet provided by Dr. Paredis was utilized. First, the four reference points were set: 1 as the 0/1 point for the control performance and cost utilities, respectively; 1 as the 1/0 point for the control performance and cost utilities, respectively; and 2 as the center points for both utilities. Next, the elicitation points had to be chosen. The following questions were asked:
If the Overall Cost is $31000, what would be the required control performance? The answer was 96 since a higher control performance is preferred.
If the cost is $28000, what is the required control performance? The answer was 95 since the lower cost compensated for a reduced control, but good control was still preferred.
If the cost is $34000, what is the required control? The answer was 96.9 since the cost was nearly the maximum value and a high control performance was needed as compensation. 
Finally, the question was asked: for the best control available (97), what cost would be acceptable? The answer was chosen as $35000, since $36000 (above the maximum cost considered) was still not acceptable.
Based on the elicitation points, the utility function was calculated as 
TotalUtility = 0.571708*ControlPerformance + 0.570154*Cost - 0.14186*ControlPerformance*Cost.
Based on the elicitation, this function was expected. Neither attribute dominated, since both attributes were important, which explains the negligible difference between 0.571708 and 0.570154. Furthermore, the cross term was calculated as -0.14186, indicating that the attribute utilities are substitutes for each other. Again, this result was expected given the nature of the attributes and the choices made during the elicitation.
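The elicited multi-attribute function can be written down directly (here ControlPerformance and Cost denote the individual attribute utilities, each on [0, 1]). A quick sanity check is that the best corner (1, 1) maps to approximately 1 and the worst corner (0, 0) to 0:

```python
def total_utility(u_perf, u_cost):
    """Multi-attribute utility with the coefficients elicited for Joel via the
    two-attribute utility trades spreadsheet."""
    return 0.571708 * u_perf + 0.570154 * u_cost - 0.14186 * u_perf * u_cost

best = total_utility(1.0, 1.0)   # 0.571708 + 0.570154 - 0.14186, approximately 1
worst = total_utility(0.0, 0.0)  # exactly 0
```

The negative cross term means the marginal value of one attribute decreases as the other improves, i.e. the attributes act as substitutes, matching the discussion above.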
The elicitation done by Robert ended with approximately the same results as Joel's. Both individual utility curves were similarly S-shaped, with the maximum utility for cost set at the minimum cost and the maximum utility for control set at the maximum control value. As expected, the multi-attribute utility elicitation again yielded a substitution relationship between the two variables. Since a better control system (more responsive due to stronger actuators, etc.) would have a higher cost, this relationship seemed natural.
Explore the Design Space
In order to explore the design space, a four-level full-factorial experiment was used. This initial exploration was accomplished using a deterministic model, and was performed in two steps for this project.
As described in section 1 of this report, the range of the design variables was arbitrarily limited. The design space exploration provided us with a method to understand the topology of our space and how those arbitrary ranges were constraining the optimization exercise. 
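A four-level full factorial over the four sizing factors can be generated directly. Note that the interior levels of an evenly spaced grid over the initial range [0.8, 1.2] are 0.933 and 1.067, which match the 0.93 and 1.07 entries in Table 1 when rounded:

```python
import itertools
import numpy as np

# Four factors at four evenly spaced levels over the initial range [0.8, 1.2].
levels = np.linspace(0.8, 1.2, 4)          # 0.8, 0.9333, 1.0667, 1.2
factors = ["El_input", "Ru_input", "AL_input", "FLSL_input"]

# Cartesian product: every combination of the four levels for the four factors.
design = [dict(zip(factors, point))
          for point in itertools.product(levels, repeat=len(factors))]

n_runs = len(design)                       # 4**4 = 256 model evaluations
```

Each of the 256 points would then be pushed through the ModelCenter/Dymola chain and ranked by total utility, as in Tables 1 and 2.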
Initially the design space was limited from 0.8 to 1.2. The 10 best design points identified by the full factorial experiment are listed in the table below:
Rank   EL     Ru     AL     FLSL   Total_Utility
1      1.2    1.2    1.2    1.2    0.650
2      1.2    1.2    1.2    1.07   0.649
3      1.2    1.2    1.2    0.93   0.648
4      1.2    1.2    1.2    0.8    0.646
5      1.2    1.07   1.2    1.2    0.639
6      1.2    1.07   1.2    1.07   0.636
7      1.2    1.07   1.2    0.93   0.634
8      1.07   1.2    1.2    1.2    0.634
9      0.93   1.07   1.2    1.2    0.631
10     1.07   1.2    1.2    1.07   0.631
Table 1: First design space exploration.
Since the global maximum of these points is on the border of all design variable ranges, we decided to reconsider the design variable boundaries. Although these results were a good indication that a better design could be found by using larger boundaries, care had to be taken to ensure the stability of the Dymola models.
By manually exploring the design space beyond the 1.2 limit on the variable range, we observed that the stability of the models remained satisfactory. As a result, the space was redefined to be between 0.8 and 1.4. Typically, this upper limit would be determined by other interaction factors. For example, as the actuators increase in size, their volume affects the aerodynamics of the wing and therefore the performance of the aircraft, which ultimately represents additional fuel burn. Since this effect and other similar effects are not included in this model, it was assumed that a factor of 1.4 is the maximum size the actuators can take without affecting the wing.
A new design space exploration was subsequently performed utilizing the broadened design variable ranges. The 8 best designs are listed in the table below:
Rank   EL     Ru     AL     FLSL   Total_Utility
1      1      1.4    1.2    1.2    0.649657
2      1.2    1.2    1.2    1.2    0.64963
3      1.2    1.2    1.2    1.4    0.649621
4      1      1.4    1.2    1.4    0.649524
5      1.2    1.2    1.2    1      0.648669
6      1.4    1      1.2    1.2    0.647017
7      1.4    1      1.2    1.4    0.646995
8      1.2    1.4    1.2    0.8    0.646626
Table 2: Design space exploration with widened borders.
The fact that the second-best design corresponds to the best design in the previous space also indicates that the range-constraint corner nearly coincided with a local optimum; therefore, relaxing the constraints did not allow significant improvement. The primary benefit of the relaxation was to show that the optimization was no longer being arbitrarily constrained.
This design exploration also gave us the opportunity to observe utility values obtained in the space as shown in the following figure. 
Figure 3: Utility dimensions
In the graph above, each point represents an individual evaluation in the design of experiments used to explore the design space. The range of the performance attribute covers most of the sensitive range of its utility function, and the upper limit of the performance utility range was reached. The fact that the best design lies close to the maximum performance utility indicates that the utility associated with the performance attribute dominated the utility associated with cost.
Figure 4: Overall utility visualization
The 3D representation of the space confirms the observation made on the previous graph. The best design lies close to the utility limit with respect to control performance. Given the layout of the space, the best design also has a poor cost utility.
Solve the Design Problem Deterministically
After exploring the design space, the design problem was solved deterministically using the DOT optimizer available in ModelCenter. The optimizer was run with both 1.2 and 1.4 as the maximum value for the design variables (before and after it was discovered that the design space was too limited). As in the design space exploration, the runs before widening the borders terminated with the maximum at approximately {1.2, 1.2, 1.2, 1.2}, indicating that the design space was too limited. In this case, the design variables were {Elevator actuator size, Rudder actuator size, Aileron actuator size, Flap/Slat actuator size}. Once the design space was widened, the deterministic optimization was run twice: once from the optimum of the design space exploration and once from a point near the second peak of the design space exploration. In all deterministic optimization runs for this problem, the solution converged near the starting point. This result indicates two main points of interest. First, since the optimization was started near the design space exploration peaks, the optimizer clearly did not have far to go to find the optimal point. Second, this result indicates one of two possibilities: either the gradient around the maximum is very low (i.e., there is a global maximum with a low gradient nearby), or there are several local maxima toward which the optimization was pointing. Based on the design space exploration and the optimization, the first option appears more likely: there is very little gradient around the maximum point, resulting in the small optimization movement. However, it still appears that there are several local maxima with small gradients in the design space.
The design variables used during the second run were, as previously stated, {Elevator actuator size, Rudder actuator size, Aileron actuator size, Flap/Slat actuator size}. When started from the optimal design exploration point, the optimization resulted in {1.0123, 1.4, 1.2045, 1.200}. Note that one of the design variables (Rudder sizing) is still at the border of the design space; since control performance has a large effect on utility in this problem, this does make sense. Interestingly, the optimization prefers the nominal size for the elevator (first design variable) and a slightly larger size for the ailerons and flaps/slats (third and fourth design variables, respectively). In general, these values seem reasonable since a slight emphasis was placed on control performance; it was therefore expected that the optimization would choose actuator sizes larger than or equal to the nominal size.
The second optimization run was started at {0.9, 1.3, 1.2, 1.4} and ended at {0.920, 1.325, 1.260, 1.296}. Only the third value moved away from the optimum found previously. Note that the utility for this optimization is 0.6489, while the previous optimization's utility was 0.6502. Therefore, the observation that there are several local maxima with low gradients between them appears accurate.
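The deterministic runs can be sketched as follows. The actual Dymola/ModelCenter chain is not reproducible here, so an assumed smooth surrogate utility with a shallow peak near the reported optimum stands in for the model, and SciPy's SLSQP plays the role of the gradient-based DOT optimizer:

```python
import numpy as np
from scipy.optimize import minimize

BOUNDS = [(0.8, 1.4)] * 4   # {El, Ru, AL, FLSL} sizing-factor range

def surrogate_utility(x):
    """Stand-in for the ModelCenter/Dymola evaluation chain: an assumed
    shallow quadratic peak near the optimum reported in the text."""
    x_star = np.array([1.012, 1.4, 1.205, 1.2])
    return 0.65 - 0.05 * np.sum((np.asarray(x) - x_star) ** 2)

# Minimize the negated utility within bounds (DOT is likewise gradient-based).
result = minimize(lambda x: -surrogate_utility(x),
                  x0=[1.2, 1.2, 1.2, 1.2], bounds=BOUNDS, method="SLSQP")
```

On the real model, `surrogate_utility` would be one full 9-process Dymola evaluation, which is why each of the optimizer's hundreds of function calls is expensive.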
Solve the Design Problem under Uncertainty
Optimization under uncertainty is a computationally intensive exercise. Every time the optimizer made a function call, the Latin Hypercube Sampling (LHS) analysis had to launch the cases modeling the uncertainty, and these LHS evaluations required significant amounts of computation. It was observed during the deterministic optimization that an optimization process can make hundreds of function calls during its search. Since each of those calls must be multiplied by the number of LHS cases to estimate the total number of evaluations carried out, the optimization process under uncertainty is about two orders of magnitude longer than the LHS analysis alone.
The addition of the noise variables caused the response to vary from sample to sample. It was therefore insufficient to optimize on a single sample, because the resulting value would be randomly skewed and inconsistent. To smooth out the samples, n of them were randomly selected and averaged together. The result was a sample estimate of the mean of each value of the response. This averaged curve was then passed to the optimization algorithm and the desired optimum was obtained.
In order to reduce the time required to run the optimization, and to allow the optimization to occur at all, it was necessary to design an LHS evaluation that would be relatively quick and approximately deterministic in its output. As suggested by the homework assignment instructions, the first step was to set the number of samples. This value was initially set low (five) and then increased until the optimization converged consistently. The second step was to set the LHS evaluations to use the same seed values for the random draws at each step of the optimization run. This setting has two primary results. First, the output from the LHS routine becomes deterministic, since the "random" selection of points is now identical at each step. Second, this setting introduces a bias into the LHS output. However, the alternative was that the optimizer would not converge, since the randomness of the LHS routine would most likely be large enough to mask the changes in utility that the optimizer was detecting. It was therefore considered an acceptable compromise. Once these changes had been made, the optimization algorithm could obtain the desired result.
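The fixed-seed LHS averaging described above can be sketched as follows. The noise-variable ranges and the stand-in model are assumptions for illustration; the point is that fixing the seed makes the averaged response deterministic across optimizer calls:

```python
import numpy as np

def lhs_samples(n, bounds, seed=42):
    """Latin hypercube sample over uniform noise ranges: one point per stratum
    in each dimension. Fixing `seed` makes the 'random' draw identical on
    every call, so the averaged response is deterministic (at the price of a
    fixed sampling bias)."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    strata = np.stack([rng.permutation(n) for _ in range(dim)], axis=1)
    u = (strata + rng.random((n, dim))) / n        # stratified uniforms in [0, 1)
    return np.array([[lo + (hi - lo) * u[i, j]
                      for j, (lo, hi) in enumerate(bounds)]
                     for i in range(n)])

def noisy_utility(x, z):
    """Stand-in for one Dymola evaluation with noise variables z (assumed)."""
    return 0.65 - 0.05 * np.sum((np.asarray(x) - 1.2) ** 2) - 0.01 * np.sum(z)

def expected_utility(x, n=5, seed=42):
    # Hypothetical +/-10% ranges for two uncertain noise variables.
    noise = lhs_samples(n, [(0.9, 1.1), (0.9, 1.1)], seed=seed)
    return float(np.mean([noisy_utility(x, z) for z in noise]))
```

The optimizer then maximizes `expected_utility` instead of a single noisy sample; increasing `n` reduces the sampling error of the mean but multiplies the model-evaluation cost.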
The sample optimum was then used as the starting point for a new iteration, and therefore a new set of noise-affected samples. The collection of sample optima formed a path, and it was expected that the pathways would converge to a noise-robust, global optimum. However, it was found that the number of samples selected to be averaged (n) affected the response. The figure below shows the pathways for expected utility and their dependence on the number of averaged samples.
Looking at the pathways within the variables themselves, it was also apparent that another phenomenon was at work. It appeared that the optimization was seeking out two different solutions in each of the parametric dimensions. A table of the two points reached and their associated parameters is shown below.
Number of Samples (n)   Elevator   Rudder   Aileron   Flap/Slat   Expected Utility
20                      1.212      1.389    1.157     1.079       0.660
100                     1.272      1.400    1.251     1.073       0.662
After rigorous investigation, it was found that an error had been made at the beginning of each new iteration: the optimizer had failed to continue updating the start point.
It is important to note that the optimizer never swung far from its original values; this behavior was also noticed in the deterministic optimization. The range of values explored within each parameter remained fairly tight (about +/- 15%).
Perform Sensitivity Analysis
Next, the optimization results were tested for sensitivity by including epistemic uncertainty in the model. This sensitivity analysis gives an indication of the robustness of the design with respect to the uncertainty. Thus, if there is little or no sensitivity with respect to the epistemic uncertainty, then it can be concluded that the design is robust enough to be close to optimal even while operating in extreme regions of the given range. First, it was necessary to determine the variables with the greatest epistemic uncertainty. During the uncertainty analysis, it was noted that the hinge moment variables had the greatest uncertainty in their elicitations, since the values were based on private aircraft such as Cessnas which may or may not be comparable to passenger aircraft. Furthermore, of the hinge moment variables, Chd_Ru (rudder) and Chd_El (elevator) had the greatest effect. Therefore, it was decided to include epistemic uncertainty for these two variables. The results are shown in Table 3.
	
	
Sensitivity Analysis
Variable   Max/Min   Elevator   Rudder   Aileron   Flap/Slat
Chd_El     Max       1.104      1.386    1.175     1.215
Chd_El     Min       1.120      1.397    1.278     1.213
Chd_Ru     Max       1.023      1.214    1.274     1.149
Chd_Ru     Min       1.119      1.399    1.242     1.214

Corresponding Optimum (100 samples):
                     1.272      1.400    1.251     1.073
Table 3: Sensitivity analysis results.
There were two main points to notice in the sensitivity analysis. First, it was initially expected that the design variable choices for the max and min analysis of each uncertain variable would be near, or possibly span, the design variable choices for the corresponding optimum point. However, that expectation proved inaccurate. The most likely cause of the optimum design variable choices being quite different from the sensitivity analysis design variable choices was the change in seed values of the LHS evaluations between the optimization runs; there appeared to be a sizable bias due to the fixed seed values in each run. Therefore, for the purposes of the sensitivity analysis, the max and min evaluations of each uncertain variable were compared without regard for their relationship to the optimum design variable choices.
Based on the uncertainty analysis, the epistemic uncertainty did have a noticeable effect on the end result of the design variables. Most notably, the Chd_Ru maximum significantly affected the Elevator, Rudder, and Flap/Slat actuator sizes, while the Chd_El maximum significantly affected the Aileron actuator size. However, since these design variables are sizing multipliers near one, the actual effects on the piston sizes, etc., will not be very large, indicating that the design is fairly robust to the epistemic uncertainty in these variables. It should also be noted that, based on previous analyses, the LHS fixed-seed bias may be affecting the output of the optimization runs. Therefore, the results of the sensitivity analysis may not be entirely due to the epistemic uncertainty included in the model.
Lessons Learned
Understanding the topology of the design space (range on design variables)
The exploration of the design space posed a very interesting problem for us; we already described the situation at length in section 3. It was quite interesting to attempt to understand the topology of the space in four dimensions: visualization becomes daunting once there are more than three dimensions. Additionally, it was instructive to attempt to reconcile the physics of the problem with the model output. Since the model contains many assumptions, there were results which should have been limited by physical phenomena but were not, because those effects were not included. For example, the tendency of the optimizer to move toward the upper limits of the design ranges required a re-evaluation of our assumptions. First, it was necessary to decide which physical phenomena would most likely cause this trend and which should limit it in order to prevent divergence. In our case, the optimizer's tendency to grow the size of some actuators might be constrained at a lower value if installation effects were considered. However, it was determined that these effects were beyond the scope of our project, so the assumptions were maintained.
Load balancing of processes
The execution of our model takes approximately 60 seconds on a single machine, so the optimization process under uncertainty would have required a very long run time. In the previous homework (uncertainty analysis), parallelization could easily be achieved by dispatching cases of the design of experiments to different computers. In an optimization process, however, the result of a given case is used to define the next case, so the execution cannot be parallelized at the model level; the parallelization must occur at the process level. Looking at the structure of the model in Figure 5, the 9 executable processes (Dymola executables) can be run independently, which enables their parallel execution. This is fortunate since, of the 60-second overall execution time, 58-59 seconds are consumed by those processes. Of course, the 9 Dymola processes have different runtimes (between 2 and 9 seconds). Therefore, even with 9 computers available, the execution time cannot be cut down to 60/9 ≈ 6.7 seconds; the best execution time that can theoretically be attained is approximately 10 seconds per model evaluation, bounded by the slowest process plus overhead. Two approaches were used to implement the parallelization, both relying on a group of 9 networked computers.
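The makespan argument can be checked with a small scheduling sketch. The individual process runtimes below are assumed values within the 2-9 second range quoted above:

```python
def makespan(runtimes, n_machines):
    """Greedy longest-processing-time assignment of independent processes:
    each process goes to the currently least-loaded machine."""
    loads = [0.0] * n_machines
    for t in sorted(runtimes, reverse=True):
        i = loads.index(min(loads))
        loads[i] += t
    return max(loads)

runtimes = [9, 8, 7, 6, 5, 4, 3, 2, 2]   # assumed per-process times (s)
parallel_time = makespan(runtimes, 9)     # bounded below by the slowest process
serial_time = makespan(runtimes, 1)       # all processes on one machine
```

With one machine per process the makespan equals the slowest process's runtime, not the average load, which is why 9 machines cannot reach 60/9 ≈ 6.7 seconds and ~10 seconds (slowest process plus overhead) is the practical floor.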
Figure 5: Structure of the model
The first approach was based on Centerlink. With the models integrated on the Centerlink server, the server dispatched their execution over the 9 machines. The use of Centerlink decreased the execution time from 60 to 30 seconds. This factor-of-two decrease in runtime was disappointing since 9 machines were available. We believe this poor runtime improvement was due to two factors:
Transmission time: for each execution, Centerlink sends the model to the computer, launches the execution, and removes the files from the machine. The overhead associated with those tasks is quite significant.
The load balancing information used by Centerlink does not yield optimal parallelization. The computers used to support the trades were Pentium Ds, which claim that 2 processes can be run in parallel on the same station. In fact, double executions are significantly slower than single ones, so instead of using the 9 available computers, only 4 or 5 computers were effectively used to perform the execution.
The second approach was a manual allocation of the processes to the machines. Since there were 9 machines for 9 processes, one process was assigned to each machine. This manual setup was created using Analysis Server, and it cut the execution time down to 11-12 seconds, an almost ideal parallelization of the processes. The only drawback was the slight overhead associated with the transmission of the information.
Comparison of probabilistic optimization runs
As noted several times during the evaluation of the probabilistic optimization and the sensitivity analysis, there is not very good consistency between the probabilistic optimization runs, regardless of the number of samples used in the LHS evaluation (provided the number of samples remains reasonable for a time-limited optimization). Using the same seed values for the random draws at each iteration works well to allow the optimization to converge, but it appears to produce a bias between runs that would need to be factored into the overall analysis to obtain a better comparison between runs and a better evaluation of the sensitivity analysis. The best option for the sensitivity analysis would be to run it using a deterministic optimization, giving a cleaner evaluation of the epistemic uncertainty. Indeed, this may have been the intended method for this homework assignment; however, since the sensitivity analysis was introduced after the probabilistic optimization, we interpreted it as necessary to perform the analysis on a probabilistic optimization basis.
Cyril de Tenorio
Joel Dunham
Robert Combier
[Embedded chart data omitted: expected-utility pathways versus iteration for 10, 20, 40, and 100 LHS samples (paths 1 and 2), plotted as Expected Utility against Rank.]
